15 minutes on the phone with a robot.
Last week, I spoke with a company using AI voice agents to handle outbound sales.
One detail stuck with me:
Their average call duration?
Over 15 minutes.
That’s a long time to keep someone engaged — especially when the person on the other end knows they’re speaking to an AI.
Now, everything about this use case was "ethical" and "transparent".
But it reminded me of a different story...
In a case that came to light in May 2024, an employee at Arup, the British engineering firm, was tricked into transferring HK$200M (around £20M) to fraudsters.
How was the employee fooled?
A deepfake video call featuring multiple AI-generated “colleagues”.
It was synthetic media, generated with AI, and convincing enough that a real employee sent millions.
The Alan Turing Institute recently published a report on how AI is accelerating certain forms of crime.
Three trends stood out:
→ Criminal innovation
Multimodal models make it easier than ever to generate realistic synthetic content — voices, faces, full conversations.
→ Upgraded old tactics
Phishing, social engineering, payment fraud — now layered with AI that adds nuance, believability, and scale.
→ Exploitation of trust
Familiar voices. Known faces. AI weaponises social signals we instinctively trust — urgency, authority, emotional cues.
So what can be done?
The report outlines the potential to use "AI to fight AI".
For example, using AI to flag deepfakes, detect phishing emails, trace anomalies in real time, and strengthen threat detection across large datasets.
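To make that last point a little more concrete, here's a rough sketch (mine, not the report's) of what "tracing anomalies" can look like in practice: a simple model learns what normal payment behaviour looks like, then flags transfers that fall outside it. The features, numbers, and thresholds below are purely illustrative assumptions.

```python
# Illustrative sketch only: a generic anomaly detector on payment records.
# Not the Turing Institute's method; all features/values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per transaction: amount, hour of day, unknown-payee flag
rng = np.random.default_rng(0)
normal_payments = np.column_stack([
    rng.normal(5_000, 1_500, 500),   # typical transfer amounts
    rng.normal(11, 2, 500),          # mostly business hours
    np.zeros(500),                   # known payees
])

# Train on "normal" history; assume ~1% of future traffic is anomalous
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_payments)

# A very large transfer to an unknown payee, late at night
suspicious = np.array([[20_000_000, 22, 1]])
print(model.predict(suspicious))  # -1 means flagged for human review
```

The point isn't the specific model; it's that the same pattern-recognition that makes deepfakes cheap also makes out-of-pattern behaviour cheap to spot, provided the flag goes to a human before the money moves.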
Here’s a link to the full report for anyone who wants to read more:
https://cetas.turing.ac.uk/publications/ai-and-serious-online-crime