As vishing becomes more common among threat actors, researchers have found that AI-generated voice clones, built from as little as five minutes of recorded audio, are on the rise.
NCC Group's research team has explored how AI-powered voice impersonation makes classic social engineering attacks even more refined, blurring the line between what is real and what is simulated. This could put enterprises, their employees, and everyday individuals at increased risk of voice phishing, or vishing, attacks from bad actors seeking access to personal information, financial accounts, sensitive corporate data, and more.
NCC Group's report includes a clip of a voice clone that researchers generated in real time, though the company declined to publish the technical details of its work to prevent attackers from building similar voice clones.
“That said, it should be expected some threat actors have already developed these techniques themselves,” NCC Group's Pablo Alobera, managing security consultant; Víctor Lasa, security consultant; and Mark Frost, principal security consultant, wrote in the report.
