March 17, 2025
How is GenAI Used in Phishing Campaigns?
Generative AI (GenAI) has fundamentally transformed the phishing landscape, enabling cybercriminals to create sophisticated, personalized attacks at unprecedented scale and speed. What once required significant technical skills, time, and resources can now be accomplished in seconds by anyone with access to AI tools and basic prompting knowledge.
According to cybersecurity vendor Perception Point's 2024 Annual Report, only 1% of attacks in 2022 utilized GenAI, but that number jumped to 18.6% in 2023 - representing a 1760% increase. This dramatic surge reflects how quickly cybercriminals have adopted and weaponized AI technologies for malicious purposes.
The GenAI Revolution in Cybercrime
Lowered Barriers to Entry
With generative AI, it's easier than ever for cybercriminals to separate people and companies from their money and data. Low-cost, easy-to-use tools coupled with a proliferation of public-facing data make for an expanding threat landscape. Someone with no coding, design, or writing experience can level up in seconds as long as they know how to prompt.
Automation and Scale AI's automation capabilities also mean that bad actors can more easily scale operations, such as phishing campaigns, which until recently were tedious, manual, and expensive undertakings. As the volume of attacks increases, so does the probability of an attack's success.
The Economic Impact
The financial implications are staggering. CyberSecurity Ventures predicted the global annual cost of cybercrime to run about $3 trillion a year in 2015, but by October 2023, they expect global cybercrime damage costs to grow by 15 percent per year over the next two years, reaching $10.5 trillion USD annually by 2025. Generative AI is significantly contributing to this escalation.
Key Ways GenAI Enhances Phishing Campaigns
Voice Cloning and Audio Manipulation
It now takes fewer than 3 seconds of audio for cybercriminals using generative AI to clone someone's voice, which they use to trick family members into thinking a loved one is hurt or in trouble or banking staff to transfer money out of a victim's account.
Real-World Impact In May 2023, ethical hackers used a voice clone of a "60 Minutes" correspondent to trick one of the show's staffers into handing over sensitive information in about five minutes — all as the cameras were rolling.
Perfect Text Generation
Eliminating Traditional Detection Methods Guidance for spotting a phishing email used to be relatively simple: Is the message rife with grammatical and punctuation errors? Then it could be the first stop in a scam pipeline. But in the AI era, those signals have gone the way of the pilcrow. Generative AI can create convincing and flawless text across countless languages, leading to more wide-spread, sophisticated, and personalised phishing schemes.
Language Perfection GenAI eliminates the telltale signs that previously helped users identify phishing attempts. Perfect grammar, appropriate tone, and cultural context make these attacks nearly indistinguishable from legitimate communications.
Visual Deepfakes and Image Manipulation
Gayle King, Tom Hanks, MrBeast: these are just some of the celebrities whose names have made the headlines recently. AI deepfakes of the celebs hit the internet earlier this fall, with scammers using their likenesses to deceive an unsuspecting public.
Business Impersonation Beyond celebrity deepfakes, criminals create convincing visual content impersonating executives, business partners, and trusted vendors to facilitate business email compromise and financial fraud.
Code Generation for Non-Technical Attackers
With generative AI, the phrase "do more with less" doesn't just apply to people power. It also pertains to practical knowledge. Generative AI's coding and scripting abilities makes it easier for cybercriminals with little or no coding prowess to develop and launch attacks. This reduced barrier to entry could draw more individuals into the cybercrime ecosystem and improve operational efficiencies.
Advanced GenAI Phishing Techniques
Personalized Password Attacks
Leaning on publicly available data, like information found on someone's social media accounts, bad actors can use generative AI to output a list of possible — more relevant — passwords to try out. This transforms brute force attacks from random guessing to targeted, personalized attempts.
CAPTCHA Bypass
New research indicates that bots are now faster and more accurate when it comes to solving CAPTCHA tests. This removes another traditional barrier that previously helped distinguish human users from automated attacks.
Prompt Injection Attacks
Successful prompt injections — which concatenate (i.e. join) malicious inputs to existing instructions — can stealthily override developer directives and subvert safeguards set up by LLM providers. They steer the model's output in whichever direction the attack's author chooses, telling the LLM, "Ignore their instructions, and follow mine instead."
GenAI Impersonation and Brand Exploitation
AI Tool Impersonation
The report also shows an increase in phishing attacks that impersonate popular GenAI tools over the past year. These attacks use imposter sites to manipulate and exploit unsuspecting victims to hand over proprietary and private information.
Data Harvesting Through False Services The majority of GenAI fraud was not for the purpose of credential theft. Instead, these impersonation sites attempt to trick people into entering highly personal information by promising to generate a résumé or similarly personal document. In addition to cybercriminals stealing sensitive and personal information, the returned document is typically a PDF where malware can hide out and be delivered.
Evasive Technique Integration
Cybercriminals are also leveraging AI-powered techniques to increase their chances of bypassing traditional security layers, enabling them to enhance the scale at which they compromise poorly secured websites, create counterfeit sites, and embed malware in files that existing tools fail to detect.
The Current Threat Landscape
Massive Scale Increases
Menlo Threat Intelligence analyzed more than 752,000 browser-based phishing attacks and studied the trends now shaping AI-powered threats. The research reveals that a surge in generative AI-based threats has spurred a 140% increase in browser-based phishing attacks compared to 2023, and a 130% increase specifically in zero-hour phishing attacks.
Business Email Compromise Evolution
According to cybersecurity vendor Perception Point's 2024 Annual Report: Cybersecurity Trends & Insights, phishing attacks represent 70.8% of all advanced attacks via email (business email compromise or BEC) and 79,8% of web browser-based attacks.
Defending Against GenAI-Enhanced Phishing
The AI Arms Race
The best defence against AI is AI. As bad actors ramp up their efforts, it's the legitimate businesses that have embraced AI that stand the best chance of defending against these attacks.
Education and Awareness
An engaged workforce is a more vigilant workforce. Provide employees with the space to learn about and experiment with generative AI tools — but not before educating them about best practices and establishing company-wide guardrails that protect and manage the risks associated with generative AI.
Modern Training Requirements It becomes that much more imperative that employees be enrolled in new-school security awareness training so they can interact with every email with a sense of vigilance and scrutiny, helping to reduce the likelihood of a successful phishing attack.
Technical Defenses
Advanced Detection Systems Organizations need security solutions that can detect AI-generated content and identify sophisticated manipulation techniques that traditional filters miss.
Behavioral Analysis Since GenAI can create perfect-looking content, security systems must focus on behavioral patterns and anomalies rather than traditional content-based detection methods.
Why GenAI Phishing Matters for MSPs
Amplified Client Risk
For managed service providers, GenAI-enhanced phishing represents a multiplied threat across all client environments. The sophistication and scale of AI-powered attacks mean that even security-conscious clients face significantly elevated risks.
Multi-Client Attack Vectors • AI-generated spear-phishing targeting specific client industries • Automated attacks that can adapt to different client environments • Voice cloning attacks impersonating client executives or MSP staff • Deepfake videos used for business email compromise across multiple clients
Operational Impact on MSPs
Incident Response Complexity GenAI attacks are more sophisticated and harder to detect, requiring: • Advanced forensic capabilities to identify AI-generated content • Specialized training for support staff to recognize new attack patterns • Enhanced client communication about evolving threat landscapes • More complex investigation and remediation procedures
Compliance and Liability Concerns • Clients in regulated industries face enhanced scrutiny for AI-related security incidents • MSPs must demonstrate awareness and protection against AI-enhanced threats • Documentation requirements for AI-related security measures and training • Potential liability for failing to protect against known AI attack vectors
Competitive Differentiation
Advanced Protection Services MSPs that understand and protect against GenAI threats can offer: • AI-aware security awareness training programs • Advanced threat detection specifically designed for AI-generated attacks • Proactive monitoring for voice cloning and deepfake attempts • Specialized incident response for AI-enhanced attacks
Client Education Leadership MSPs that stay ahead of GenAI threats can position themselves as trusted advisors by: • Educating clients about evolving AI attack techniques • Providing regular updates on new GenAI threat vectors • Offering specialized training for client employees on AI threat recognition • Demonstrating expertise in emerging cybersecurity challenges
Future Implications
Escalating Sophistication
As the adoption of generative AI tools continues to grow, and the applications themselves become more advanced, companies and individuals will likely see cybercriminals deploy more and more attacks. Cybercriminals' efforts will result in highly customised attacks on specific targets that scammers can launch at scale automatically — flooding the digital world with one click.
Detection Challenges
Determining what's real and what's synthetic will only become more difficult. This creates an ongoing challenge for both technical security systems and human recognition capabilities.
Conclusion
The integration of GenAI into phishing campaigns represents a fundamental shift in the cybersecurity threat landscape. The 1760% increase in AI-enhanced attacks from 2022 to 2023 demonstrates how quickly cybercriminals adapt and weaponize new technologies.
For MSPs, understanding GenAI phishing techniques is crucial for protecting clients and maintaining competitive positioning. The combination of perfect text generation, voice cloning, visual deepfakes, and automated scaling creates unprecedented challenges that require both advanced technical defenses and comprehensive human risk management strategies.
Success in defending against GenAI phishing requires embracing AI-powered security solutions while investing heavily in employee education and awareness. As the technology continues to evolve, organizations must remain vigilant and adaptive to stay ahead of increasingly sophisticated threats.
The future belongs to those who can harness AI's defensive capabilities while building human resilience against AI-powered attacks. MSPs that master this balance will provide superior protection for their clients while establishing themselves as leaders in the evolving cybersecurity landscape.
Protect your MSP clients from sophisticated GenAI-enhanced phishing attacks with advanced security solutions that combine AI-powered detection and personalized human risk management training designed for the AI era at Kinds Security.