We have spent the better part of two decades training people to spot bad emails. Look for typos. Check the sender domain. Don’t click suspicious links. And we got pretty good at it. Spam filters improved. Security awareness training became a standard corporate ritual. Employees learned to hover over links before clicking.
Then attackers stopped caring about email.
Social engineering in 2026 is not about tricking someone into clicking a link. It is about understanding how people think: how they respond to authority, urgency, familiarity, and trust, and then exploiting those instincts with surgical precision. The toolkit has grown far beyond phishing. And the psychology underneath it has barely changed at all, because human psychology rarely does.
The new playing field
The attacks making headlines today don’t look like the attacks from five years ago. They are slower, more patient, and deeply personal. Attackers now spend weeks, sometimes months, studying a target before making contact. They read LinkedIn profiles, listen to earnings calls, and scroll through public social media to learn someone’s routine, their vocabulary, their manager’s name, and what time zone they work in.
When they finally reach out, they don’t sound like a scammer. They sound like a colleague.
AI-generated voice cloning and deepfake video have changed the game in ways most organizations are not yet prepared for. It is now possible, with a few minutes of recorded audio, to clone a person’s voice convincingly enough to pass a casual phone call. The technology to do this costs almost nothing and requires very little skill. And it is already being used.
Case study: the HK$18.5M AI voice cloning scam (Hong Kong, January 2025)
In January 2025, fraudsters in Hong Kong used AI-generated voice cloning to impersonate a company’s finance manager over WhatsApp. The fake voice was convincing enough to authorize a cryptocurrency transfer worth HK$18.5 million (roughly $2.3 million USD).
No system was broken into. No malware was deployed. The attack succeeded entirely because the victim heard a voice they recognized and trusted. The finance manager whose voice was cloned had no idea it had happened until the money was gone. This is the new shape of financial fraud. The barrier to entry is a few minutes of publicly available audio and an off-the-shelf AI tool. What was once science fiction is now a line item in a criminal’s toolkit.
When a phone call costs £300 million
If you want to understand just how far-reaching these attacks can be, look at what happened to Marks & Spencer in April 2025. One of the UK’s most iconic retailers, with 65,000 employees and over 1,400 stores, was brought to its knees not by a sophisticated technical exploit, but by a phone call to a help desk.
The group known as Scattered Spider began infiltrating M&S’s systems as early as February 2025. Their entry point was not a firewall or unpatched server. They called the IT help desk operated by a third-party vendor, impersonated an M&S employee, and talked their way into a password reset.
With those credentials, they moved laterally through the network, harvested user data, and eventually deployed ransomware that encrypted critical systems. Online shopping was suspended for weeks. Over £500 million was wiped from M&S’s market value, with operating profits expected to fall by around £300 million for the year.
Co-op and Harrods were hit by the same group using near-identical tactics around the same time. Four arrests have since been made, but the total financial impact across the three retailers has been estimated at up to £440 million.
Why our brains keep falling for it
The reason social engineering works is not that people are careless. It is that our instincts, the shortcuts our brains use to make fast decisions, were not built for this environment.
We are wired to trust authority. When someone appears to be a senior leader, we feel a pull to comply, especially when they convey urgency. We are wired to trust familiarity. When someone sounds like they know us, references our colleagues, and speaks in the right industry language, our guard drops. We are wired to avoid conflict. When someone pushes, we are more likely to give in than to risk an awkward confrontation.
Attackers exploit all of this deliberately. They create artificial time pressure to stop you from thinking clearly. They invoke authority to override your instinct to verify. They make saying no feel like a betrayal of your own team. AI-supported phishing represented more than 80% of observed social engineering activity worldwide by early 2025, according to ENISA. Attacks that once took weeks of manual research can now be automated. A criminal group can profile a target, clone a voice, craft a personalized message, and launch an attack in hours.
What defense actually looks like now
The organizations getting this right have stopped treating security awareness as a once-a-year checkbox. They are building cultures where verification is normalized: where it is not rude to double-check, and where a second confirmation step on a sensitive request is standard, not an insult.
- Call back on a number you already have, not the one they gave you.
- Help desks should never reset credentials for privileged accounts without a secondary approval workflow.
- Treat urgency as a warning signal. Legitimate requests survive a ten-minute delay. Fraudulent ones usually cannot.
- Establish a pre-agreed verbal passphrase for out-of-the-ordinary financial instructions.
- Run vishing (voice phishing) drills and fake executive impersonation simulations, not just email phishing tests.
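The callback, passphrase, and delay controls above can be sketched as a single policy check that every out-of-band request must pass. This is an illustrative sketch only: the directory, passphrase store, field names, and ten-minute threshold are hypothetical stand-ins for whatever records and policy your organization actually maintains, not a real API.

```python
from dataclasses import dataclass

# Hypothetical trusted directory: callback numbers on file,
# never numbers supplied by the caller.
KNOWN_NUMBERS = {"finance.manager": "+44 20 7946 0000"}
# Pre-agreed verbal passphrases for unusual financial instructions.
PASSPHRASES = {"finance.manager": "blue heron"}

@dataclass
class SensitiveRequest:
    requester: str          # claimed identity
    callback_number: str    # number actually dialed for verification
    passphrase: str         # passphrase collected on the callback
    minutes_elapsed: int    # cooling-off delay before acting

def verify(req: SensitiveRequest) -> tuple[bool, str]:
    """Return (approved, reason). Every check must pass; urgency is never an override."""
    known = KNOWN_NUMBERS.get(req.requester)
    if known is None:
        return (False, "requester not in directory")
    if req.callback_number != known:
        return (False, "callback must use the number already on file")
    if req.passphrase != PASSPHRASES.get(req.requester):
        return (False, "passphrase mismatch")
    if req.minutes_elapsed < 10:
        return (False, "mandatory delay not yet elapsed")
    return (True, "approved")
```

The point is not the code but the shape of the control: the number comes from your own records, the passphrase was agreed in advance, and the delay is non-negotiable. No single check, and no single person under pressure, can wave the request through.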
The honest conclusion
We cannot engineer our way out of this problem. There is no software patch for human trust, no firewall that blocks a convincing phone call, no antivirus that catches a well-researched impersonation. The M&S breach and the Hong Kong voice scam are not outliers; they are previews.
What we can do is design systems and cultures that don’t rely on any single person getting it right under pressure. The organizations that will hold up are not the ones with the most sophisticated tools. They are the ones that have made skepticism normal, verification easy, and the cost of a brief pause much lower than the cost of a catastrophic mistake.
The attackers understand human psychology deeply. It is time for defenders to use that same understanding to build their defenses.