If AI were a person, it would be wearing a really nice suit, smiling politely, and asking you for your bank details while holding a clipboard.
AI didn't invent fraud. Humans have been scamming each other since the invention of language.
But AI just automated it, scaled it to infinity, and gave it a perfect British accent. 🎩
🎤 The $25 Million Video Call Heist
This is a true story from Hong Kong in 2024. It still keeps me up at night.
A finance employee at a multinational firm received an email from the company's UK-based CFO concerning a "secret transaction."
The employee was suspicious. Good! He did the right thing. He asked for a video call to verify.
He joined the video call.
On the screen were the CFO and several other colleagues he knew personally. They looked real. They sounded real. They interacted with each other.
They told him to transfer $25 million (HK$200 million) to various accounts.
He did it.
Every single person on that video call was a deepfake.
The employee was the only human in the room. He was arguing with ghosts. And the ghosts won.
Did you know? The scammers used publicly available footage of the executives to train the AI models. They cloned their voices and faces. The deepfake era isn't "coming soon." It already cost one company $25 million in 2024.
📧 Phishing at Industrial Scale (Spear Phishing for Everyone)
Remember when phishing emails were obvious?
"Dear Sir/Madam, I am Prince of Nigeria requiring assistance of bank transfer."
(Narrator: The typos were the only firewall we had.)
Now, AI writes phishing emails that are:
- Grammatically perfect. (Thanks, Grammarly/ChatGPT)
- Contextually aware. (It scraped your LinkedIn and knows you just started a new job as a Junior Dev)
- Personalized. (It references your boss by name)
The New Attack Vector:
- AI scrapes your company org chart.
- AI sees you report to Sarah.
- AI waits until Sarah posts on Twitter that she's at a conference.
- AI emails you (spoofed as Sarah): "Hey, I'm stuck at the conference and need you to approve this vendor invoice quickly."
It's not "spray and pray" anymore. It's a sniper rifle aimed by an algorithm.
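On the defender's side, one cheap technical check still works against the "spoofed as Sarah" step: the visible From: address should line up with the domain that actually authenticated the message. Here's a minimal sketch in Python (the headers and domains are made up for illustration; this is not a production mail filter, just the idea behind DMARC-style alignment):

```python
# Illustrative sketch: flag messages whose visible From: domain doesn't
# match the domain that passed DKIM, per the Authentication-Results header.
import re
from email import message_from_string
from email.utils import parseaddr

def from_domain(msg) -> str:
    """Domain of the visible From: header (what the victim sees)."""
    _, addr = parseaddr(msg.get("From", ""))
    return addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""

def dkim_domain(msg) -> str:
    """Domain reported as having passed DKIM, if any."""
    auth = msg.get("Authentication-Results", "")
    m = re.search(r"dkim=pass[^;]*\bheader\.d=([\w.-]+)", auth)
    return m.group(1).lower() if m else ""

def looks_spoofed(raw_message: str) -> bool:
    msg = message_from_string(raw_message)
    signed = dkim_domain(msg)
    # No DKIM pass at all, or a pass for a different domain than the
    # one shown to the user -> treat as suspicious.
    return signed == "" or from_domain(msg) != signed

raw = (
    "From: Sarah <sarah@yourcompany.com>\n"
    "Authentication-Results: mx.example.net; dkim=pass header.d=evil-relay.biz\n"
    "Subject: approve this vendor invoice quickly\n\n"
    "Hey, I'm stuck at the conference...\n"
)
print(looks_spoofed(raw))  # True: signed by evil-relay.biz, not yourcompany.com
```

In practice your mail provider does this (SPF/DKIM/DMARC alignment) before the message hits your inbox, which is exactly why the scammers increasingly skip email and go straight to cloned voice calls.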
🧠 The Fraud Menu 2026
Here's what constitutes a "crime starter pack" now:
| Crime | AI Contribution | Terror Level |
|---|---|---|
| CEO Cloning | Clone any voice with 3 seconds of audio. Call employees and demand wire transfers. | 🌶️🌶️🌶️🌶️ |
| Invoice Fraud | Generate 1,000 unique, convincing fake invoices for accounts payable departments. | 🌶️🌶️ |
| Romance Scams | AI boyfriends/girlfriends that never sleep, remember every detail, and slowly drain savings. | 🌶️🌶️🌶️ |
| Identity Theft | Synthesize consistent fake faces (that don't exist) to pass KYC (Know Your Customer) checks. | 🌶️🌶️🌶️ |
It's Ocean's 11, but Ocean is a Python script and the 11 accomplices are just GPU clusters.
🎭 The "Proof of Humanity" Crisis
We are heading toward a crisis of verification.
If you can't trust video... If you can't trust audio... If you can't trust text...
What creates trust?
We might have to go back to:
- "Meet me in person to sign this."
- "Here is a physical key."
- "I will rely on a shared secret we established offline."
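That last idea, a shared secret established offline, is really just a challenge-response protocol. A minimal sketch in Python (the secret and the names here are hypothetical; a real deployment would also worry about replay and secret storage):

```python
# Illustrative challenge-response over a secret shared offline.
# Neither side ever transmits the secret itself, so a deepfake that
# can mimic a voice perfectly still can't answer the challenge.
import hmac
import hashlib
import secrets

SHARED_SECRET = b"established-in-person-not-over-email"  # hypothetical

def make_challenge() -> bytes:
    """Verifier picks a fresh random nonce for each call."""
    return secrets.token_bytes(16)

def respond(secret: bytes, challenge: bytes) -> str:
    """Caller proves knowledge of the secret without revealing it."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(secret: bytes, challenge: bytes, response: str) -> bool:
    expected = respond(secret, challenge)
    # Constant-time comparison, so timing leaks nothing either.
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
answer = respond(SHARED_SECRET, challenge)                    # the real caller
print(verify(SHARED_SECRET, challenge, answer))               # True
print(verify(SHARED_SECRET, challenge, "a deepfake's guess")) # False
```

The family "safe word" below is the human-friendly version of exactly this: a secret that never touched the internet, so no scraper could have trained on it.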
We spent 20 years digitizing everything for convenience. We might spend the next 10 undigitizing things for security.
Did you know? Some families are now establishing "safe words" so if "Mom" calls from a strange number saying she's kidnapped and needs money, the kid knows to ask for the code word. That's where we are.
🎯 My Take
AI is a tool. Tools amplify human intent.
When a doctor uses AI, it amplifies healing. When a scammer uses AI, it amplifies theft.
The problem isn't the tech. The problem is that our entire security infrastructure relies on "If I see/hear you, I trust you."
That biological assumption is now a security vulnerability.
The patch for that vulnerability isn't software. It's skepticism.
Trust nothing. Verify everything.
And if I call you asking for money? Ask me for the safe word. (It's "TypeScript," obviously.) 🌙