AI is supercharging classic cyberattacks: hyper-personalized phishing, believable deepfakes, and faster malware iteration. CrowdStrike documents adversaries already using AI to scale the speed and volume of attacks, not to invent entirely new hacks.
## What AI-powered attacks look like today
- Phishing at scale: Native-language emails and chats tailored to role, tech stack, and recent projects.
- Deepfake social engineering: Voice/video spoofs to rush payments, share credentials, or bypass identity checks.
- Malware iteration: Faster testing of obfuscation and evasion against endpoint defenses.
- Credential attacks: Automated password spraying, credential stuffing from breached combo lists, and token replay, all accelerated by AI tooling.
- Recon & targeting: LLMs summarize public data to map org charts, suppliers, and exposed assets for precision targeting.
- App & data risks: Prompt injection against internal chatbots; data exfiltration via misconfigured AI tools.
## What’s real vs. hype
AI lowers the cost and increases the speed of existing TTPs. Attackers blend AI content generation with tried-and-true techniques like MFA fatigue and business email compromise.
Defenders should focus on control coverage and playbooks, not novelty alone. The biggest wins come from identity, email, endpoint, and process rigor.
## 7 defenses that work now
- Deploy phishing-resistant MFA everywhere: Prefer FIDO2/passkeys for admins and high-risk users; pair with conditional access and device posture checks (CISA).
- Harden email and domains: Enforce SPF, DKIM, and DMARC at p=reject; monitor lookalike domains; add supplier allow-lists and payment verification callbacks.
- Build deepfake-resilient processes: Use out-of-band callbacks to verified numbers, shared code words for urgent requests, and require two-person approval on wire transfers.
- Level up endpoint defenses: Use behavior-based EDR/XDR, block macros and unsigned code, patch fast, and enforce least privilege with app control.
- Secure your GenAI apps: Restrict models to allow-listed tools, filter outputs, and scope data access tightly; monitor logs; adopt the OWASP Top 10 for LLM Applications to mitigate prompt injection and data leakage.
- Protect data paths: DLP for chat and browser uploads, secrets scanning in repos, and RAG pipelines restricted to approved sources with strong access controls.
- Prepare the team: Run realistic AI-enabled phishing and deepfake drills; update IR playbooks with verification steps and comms templates for executive impersonation.
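To make the email-hardening item concrete: a DMARC TXT record only blocks spoofed mail when its policy tag is `quarantine` or `reject`. A minimal sketch of a policy check, assuming you have already fetched the record string from DNS (the example record below is illustrative):

```python
def parse_dmarc(txt_record: str) -> dict:
    """Parse a DMARC TXT record into tag/value pairs."""
    tags = {}
    for part in txt_record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

def dmarc_enforces(txt_record: str) -> bool:
    """True only when the policy actually blocks or quarantines spoofed mail."""
    tags = parse_dmarc(txt_record)
    return tags.get("v") == "DMARC1" and tags.get("p") in ("quarantine", "reject")

# Example record, as it would appear at _dmarc.example.com
record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com; pct=100"
print(dmarc_enforces(record))  # → True
```

A record with `p=none` reports spoofing but does nothing to stop it, which is why monitoring-only DMARC deployments give a false sense of safety.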
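For the GenAI-app item, "allow-listed tools" means the model can only invoke functions you have explicitly registered, so a prompt-injected request for anything else is refused by construction. A sketch under that assumption (the registry and tool names are hypothetical):

```python
from typing import Callable

# Explicit registry: the model may call only what is listed here.
ALLOWED_TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_order": lambda order_id: f"status for {order_id}: shipped",
}

def dispatch(tool_name: str, argument: str) -> str:
    """Refuse any tool call the allow-list does not name."""
    tool = ALLOWED_TOOLS.get(tool_name)
    if tool is None:
        return f"refused: '{tool_name}' is not an allow-listed tool"
    return tool(argument)

print(dispatch("lookup_order", "A123"))
print(dispatch("delete_records", "*"))  # blocked no matter what the prompt says
```

The design choice here is deny-by-default: the safe set is enumerated, so a novel injection payload cannot widen the model's capabilities.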
## Quick wins this week
- Set DMARC to quarantine or reject; turn on passkeys for admins.
- Add a mandatory call-back rule for any payment or data request made over chat, email, or video.
- Block high-risk file types at the email gateway; disable Office macros in files downloaded from the internet.
## Key takeaway
AI amplifies attacker productivity. Your best defense is disciplined identity, hardened endpoints, trustworthy processes, and clear playbooks—augmented by selective, proven AI tools.
## Sources
- CrowdStrike: AI-Powered Cyberattacks
- CISA: Phishing-Resistant MFA
- OWASP: Top 10 for LLM Applications
Like this? Get 2-minute, no-fluff AI insights in your inbox. Subscribe to The AI Nuggets.