The vast majority of organizations are encountering AI-augmented threats, yet they remain confident in their defenses despite inadequate investment in detection and more than half falling victim to successful attacks.
AI-augmented deepfakes are increasingly common in cyberattacks on businesses and government agencies, and most organizations are aware of the danger. However, experts say a preparation paradox is at work: most organizations lag in investing in technical defenses against deepfakes, even as they feel ready for the onslaught.
On Oct. 7, AI giant OpenAI published research showing that a growing number of criminal and nation-state groups are using large language models (LLMs) to improve their attack workflows and to create better phishing lures and malware. A second, survey-based report, published by email security firm Ironscales on Oct. 9, found that these approaches appear to be working: The vast majority of midsized firms (85%) have seen attempted deepfake and AI-voice fraud, and more than half (55%) have suffered financial losses from such attacks.
Most companies are taking the threat seriously but are nonetheless struggling to keep up, says Eyal Benishti, CEO of Ironscales...
