Dive into our conversation with Sam Peters, Chief Product Officer at ISMS.online, as he discusses the financial impacts businesses face from deepfake attacks. From direct financial losses to reputational damage and regulatory fines, learn how companies are navigating this emerging threat landscape:
Can you elaborate on the specific financial impacts businesses face when targeted by deepfake attacks?
There are many ways a deepfake attack can impact a business financially. It's not just the money lost to cybercriminals, although that is often a considerable sum; it's the broader financial impacts that companies usually don't think about until they find themselves in that situation, such as business operation costs, reputational costs, legal costs and regulatory fines.
The most significant impact of deepfake attacks on businesses to date has been fraudulent financial transfers initiated off the back of a deepfake audio or video approach. A notable instance involved a finance worker at a multinational firm being tricked into paying $25 million to fraudsters who used deepfake technology to pose as the company's chief financial officer in a video conference call. This type of attack results in an immediate financial loss the moment the transfer is executed.
An often unexpected financial impact comes as a result of the erosion of customer trust due to deepfake-induced misinformation. The negative publicity from such attacks can adversely affect stock prices, leading to considerable financial setbacks for shareholders. For example, deepfake news or videos about a company's financial health can lead to stock manipulation, with severe consequences for market valuation and investor confidence.
Addressing deepfake attacks necessitates substantial resources for crisis management, including forensic analysis, legal consultations, and public relations efforts. These activities divert attention from regular business operations, thereby reducing overall productivity.
Businesses may also face expensive litigation and potential regulatory fines if deepfake attacks involve data breaches or privacy violations. The legal expenses and regulatory penalties can further strain financial resources.
In the wake of a deepfake attack, companies often need to invest heavily in advanced cybersecurity measures to prevent future incidents. Cyber insurance premiums may also rise, increasing ongoing operational costs.
Businesses that want to improve their resilience to such attack types must take a multifaceted approach by investing in regular employee training and robust policies and enhancing their cybersecurity infrastructure through specific and targeted technical controls.
According to ISMS.online's recent State of Information Security Report, 30% of organisations have experienced AI-powered deepfake attacks in the last 12 months. Can you share some real-world examples or case studies to highlight the severity of these attacks?
The stat from our State of Information Security Report was initially shocking, but when you look at a few real-world examples that have made the press, it drives home the very real severity and diverse nature of deepfake threats:
Deepfakes are not new, although their sophistication has come on considerably. As far back as 2019, fraudsters used deepfake audio to impersonate the CEO of a UK-based energy firm, tricking an employee into transferring $243,000 to a fraudulent account.
More recently, the head of the world's biggest advertising group was the target of an elaborate deepfake scam involving an artificial intelligence voice clone. The unsuccessful scam asked an agency leader to set up a new business in order to solicit money and personal details.
In this case, the fraudsters used a publicly available image of the CEO to create a fake WhatsApp account and set up a Microsoft Teams meeting with him and another senior executive. According to press reports, the impostors deployed a voice clone of the CEO and YouTube footage of him during the meeting. They impersonated him off-camera using the meeting's chat window. And, whilst this attack was unsuccessful, it highlights just how convincing and elaborate deepfake scams can be.
Perhaps a more common occurrence is deepfake technology used to create videos of well-known celebrities endorsing fraudulent business schemes that lead people to invest or hand money to scammers. Martin Lewis, perhaps better known as 'The Money Saving Expert', had to engage in extensive legal action and public relations campaigns to clear his name after a deepfake video of him appeared online endorsing a cryptocurrency scheme. While this may not sound like a business issue, your reputation is at risk, along with potential legal or regulatory exposure, if your business is named as part of such a scam.
And just this week, we've seen reports of a company conducting high-level job interviews falling victim to a deepfake attack in which a candidate used deepfake technology to obtain the job and a VPN to disguise their location. Once hired, they attempted to use their company-issued laptop to load malware and execute other malicious activities in an alleged 'nation-state'-backed campaign.
All of these instances serve to highlight the very real risk to businesses from deepfake-based attacks.
Could you elaborate on the necessity of a multifaceted approach to combat deepfake threats? This could include the importance of technological solutions, robust governance frameworks, and comprehensive employee training.
There is no single solution to combating the risk of deepfakes to your business; it requires a thoughtful mix of technological solutions, governance frameworks and employee training to establish a robust baseline defence against such threats.
As with most cyber risks, looking at the tools and technologies you can leverage is an obvious first line of defence. Things like advanced detection tools that can analyse audio and video for inconsistencies and real-time monitoring systems to detect threats as they occur can help identify issues before they get too far.
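To make the idea of analysing media for inconsistencies concrete, here is a deliberately simplified sketch. It assumes an upstream model has already produced a per-frame consistency score (for example, a lighting or face-landmark measure) and simply flags frames whose frame-to-frame change is anomalous; real deepfake detectors are far more sophisticated, and all the numbers here are invented for illustration.

```python
from statistics import median

def flag_inconsistent_frames(frame_scores, sensitivity=3.0):
    """Flag frames whose change from the previous frame is anomalously
    large relative to the typical frame-to-frame variation.

    frame_scores: per-frame feature values from a hypothetical upstream
    model (e.g. a lighting or face-landmark consistency score).
    Returns the indices of frames that look suspicious.
    """
    # Frame-to-frame differences.
    diffs = [abs(b - a) for a, b in zip(frame_scores, frame_scores[1:])]
    if not diffs:
        return []
    # Robust scale estimate: median absolute deviation of the diffs.
    med = median(diffs)
    mad = median(abs(d - med) for d in diffs) or 1e-9
    # A frame is suspicious if its jump sits far outside the typical range.
    return [i + 1 for i, d in enumerate(diffs) if abs(d - med) / mad > sensitivity]

# A smooth sequence with one abrupt jump at frame 5 (illustrative data).
scores = [0.50, 0.51, 0.49, 0.50, 0.52, 0.95, 0.51, 0.50]
print(flag_inconsistent_frames(scores))  # → [5, 6]: the jump into and out of frame 5
```

The point of the sketch is the workflow, not the maths: an automated layer scores incoming media continuously, and anything anomalous is escalated for human review rather than trusted outright.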
From our perspective, establishing clear policies and procedures for handling deepfake incidents or, indeed, any cyber incident is essential. This should include guidelines for verifying communications, responding to threats, ensuring compliance with relevant regulations and developing comprehensive incident response plans.
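A verification guideline of the kind described above can be captured as a simple rule: high-value requests are only approved once they have been confirmed on a trusted, independent channel and countersigned by a second approver. The sketch below illustrates that logic; the threshold, channel names and field names are all invented for the example, not drawn from any particular policy.

```python
from dataclasses import dataclass, field
from typing import Optional, Set

# Illustrative values only; a real policy would be set by the
# organisation's finance and security teams.
HIGH_VALUE_THRESHOLD = 10_000
TRUSTED_CHANNELS = {"known_phone_callback", "in_person"}

@dataclass
class TransferRequest:
    amount: float
    requested_via: str                     # e.g. "video_call", "email"
    verified_via: Set[str] = field(default_factory=set)  # out-of-band checks done
    second_approver: Optional[str] = None  # independently contacted approver

def approve_transfer(req: TransferRequest) -> bool:
    """Approve only if high-value requests were confirmed out of band
    on a trusted channel and countersigned by a second approver."""
    if req.amount < HIGH_VALUE_THRESHOLD:
        return True
    out_of_band_ok = bool(req.verified_via & TRUSTED_CHANNELS)
    return out_of_band_ok and req.second_approver is not None

# A request made only over a video call, like the $25 million case,
# fails the policy regardless of how convincing the call looked.
req = TransferRequest(amount=25_000_000, requested_via="video_call")
print(approve_transfer(req))  # → False
```

The design point is that the video call itself never counts as verification: approval depends on channels the attacker does not control.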
Another essential aspect of ensuring your cyber resilience and defence against deepfake attacks is delivering regular training programs that educate your employees about deepfake risks and how to recognise them. Deepfakes primarily work by exploiting the human element within organisations and deceiving individuals, making it essential for leaders to prioritise employee cybersecurity awareness and training programs.
However, more than technology and employee awareness is required. Leaders must take a systematic approach to identifying and treating areas of vulnerability across their entire business. This is where adhering to standards, such as ISO 27001, becomes invaluable. By aligning with the ISO 27001 standard, organisations can establish a robust security posture encompassing technological controls, governance measures, and regular assessments. This addresses technical vulnerabilities and reinforces the importance of organisational policies and employee training.
How can developing ethical AI guidelines and regularly auditing AI models help mitigate the risks associated with deepfakes?
Establishing clear principles for the ethical use of AI ensures that organisations are accountable for their AI applications. In doing so, it places responsibility on both AI providers and the organisations leveraging AI within their operations to ensure relevant checks and measures are in place to limit the potential misuse of AI technologies, such as creating malicious deepfakes.
Additionally, ethical AI guidelines can establish standards for verifying the authenticity of digital content, helping develop AI systems that could detect and flag potential deepfakes and protect the integrity of communications.
Regularly auditing AI models also helps ensure their reliability and effectiveness. Audits proactively identify vulnerabilities within the technology that could be exploited for malicious purposes, such as creating deepfakes, allowing organisations to strengthen their defences against such threats.
By evaluating and improving the accuracy and robustness of AI tools, audits enhance detection capabilities and provide insights into those tools' behaviour and performance, ensuring they function as intended. Regular audits also support regulatory compliance, reducing the risk of legal repercussions associated with unacceptable AI usage, such as deepfake incidents.
The newly released ISO 42001 standard, which provides organisations with a framework for establishing, implementing, maintaining and continually improving an artificial intelligence management system (AIMS), will be crucial. Data protection and AI security are core components of the standard, helping compliant businesses safeguard AI systems against threats. It will also help organisations comply with specific regulations like the European Union AI Act, which has now been adopted and with which relevant organisations must comply on a phased timeline running up to early 2026.
ISO 42001 also aligns with the existing ISO 27001 information security management standard. Therefore, businesses that comply with both standards will benefit from a more robust, integrated, comprehensive security posture.
Why is it essential for businesses to implement standards for information security management to defend against deepfake threats?
Businesses that leverage information security management standards, such as ISO 27001, can take a more structured and comprehensive approach to managing their information security risks. ISO 27001 sets out a framework for organisations to systematically identify, assess, and mitigate cybersecurity and information security risks facing their businesses and develop robust security policies and procedures to handle them effectively.
The additional benefit of using established standards such as ISO 27001 is that it promotes organisational accountability and transparency. Businesses with clearly defined roles and responsibilities that ensure all employees understand their part in maintaining security and responding to incidents are less likely to suffer business-ending incidents. The structured approach fosters a culture of security awareness and vigilance, which is vital for early detection and effective response to incidents like deepfakes.
Regular audits and reviews mandated by ISO 27001 also ensure that information security controls remain effective and up-to-date. Continuous monitoring and improving security measures help organisations stay ahead of emerging technologies and adapt to new threats. This proactive approach is, therefore, essential for maintaining the integrity and reliability of AI-powered detection tools, for example, to identify and mitigate deepfake content.