AI-Generated Voice Deepfakes Pose New Threat to Bank Security

Scammers are now using AI to create realistic voice deepfakes, aiming to trick people into transferring money. By mimicking real customers' voices, these scams attempt to defeat banks' voice-authentication systems and deceive call center agents.


Increasing prevalence and sophistication of voice fraud

  • A rise in AI-generated voice fraud has been noted this year, including a major case in which a Florida investor's voice was synthetically cloned to deceive his bank.
  • Voice authentication vendor Nuance detected its first successful deepfake attack on a financial services client late last year.
  • These scams are facilitated by the wide availability of voice samples online, coupled with the growth of AI capabilities and hackers' access to stolen bank account details.

Defending against evolving AI threats

  • Currently, only a small percentage of fraud calls to large financial companies are AI-generated. Most attacks have targeted credit card service call centers.
  • Fraudsters are advancing their techniques and can now convert speech into a specific target's voice in real time using advanced AI systems such as Microsoft's VALL-E.
  • Because most current defenses focus on call centers and automated systems, direct calls to high-ranking executives remain a vulnerability.
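For context on how voice authentication can be fooled at all: such systems typically compare a caller's "voiceprint" embedding against one captured at enrollment, accepting the call when the two are similar enough. The sketch below illustrates that idea with a cosine-similarity check on toy vectors; the embeddings, threshold, and function names are illustrative assumptions, not any vendor's actual implementation. A sufficiently good deepfake defeats this scheme precisely by pushing its embedding above the acceptance threshold.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_speaker(enrolled: np.ndarray, claimed: np.ndarray, threshold: float = 0.8) -> bool:
    """Accept the caller only if their voiceprint is close enough to the enrolled one.

    The 0.8 threshold is an arbitrary illustrative value.
    """
    return cosine_similarity(enrolled, claimed) >= threshold

# Toy fixed vectors standing in for real voiceprint embeddings
enrolled = np.array([0.9, 0.1, 0.3])
genuine  = np.array([0.88, 0.12, 0.31])  # close to the enrolled voice
imposter = np.array([0.1, 0.9, 0.2])     # a poor imitation, far from enrollment

print(verify_speaker(enrolled, genuine))   # accepted
print(verify_speaker(enrolled, imposter))  # rejected
```

The fraud described in the article works because a high-quality AI clone lands on the "genuine" side of this comparison, which is why vendors are moving toward dedicated deepfake-detection models rather than similarity checks alone.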


