Generative AI and ‘Deepfake’ Technology – Implications for Insurance
You would be hard pressed to find someone who has not heard the term Artificial Intelligence (AI). AI is defined as the ability of a computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.
Whilst the concept of AI is by no means new, it is the relatively recent rise of so-called Generative AI that has catapulted the topic into the mainstream and cemented its place as a disruptive technology potentially on the same scale as the internet.
Arguably the most well-known Generative AI tool, and the one that kick-started the ‘AI boom’, is ChatGPT. Launched in 2022, it is a free tool that uses deep learning to engage in human-like conversation in response to users’ prompts.
Another exciting and, perhaps in equal measure, terrifying AI development is ‘Deepfake’ technology. This involves digitally manipulating media – such as photographs, recorded or live voice, and video – to replace one person’s likeness convincingly with that of another.
Social Engineering fraud typically involves the compromise of an Insured’s or their supplier’s email accounts, and the addition of Deepfake technology to fraudsters’ ever-evolving arsenal of tools is a cause for concern.
Bad actors are using this technology to lend an appearance of legitimacy to fraudulent payment requests. As the technology becomes ever more sophisticated and realistic, the likelihood of potential victims being duped by it grows.
Case Example:
We recently adjusted a claim for a multinational business involving a Social Engineering fraud that targeted an employee in its finance team. Fraudsters, impersonating the company’s newly appointed CEO, approached the employee via a spoofed email account and requested a transfer of funds purportedly relating to a highly confidential business deal. The fraudsters manipulated the employee by promising a promotion and a significant salary increase in return for their cooperation.
The employee had concerns and requested a video call. Instead, the fraudsters sent a recorded video message: a convincingly doctored Deepfake of a genuine video of the CEO, originally made for a presentation, with new audio that mentioned the employee by name.
Taken in by the video, the employee paid away funds totalling over USD 2 million to various fraudster-controlled accounts in different countries.
Analysis:
As is evident from our case example, Deepfake technology has already developed to the extent that, increasingly, the untrained eye cannot detect that media has been doctored. We anticipate that this rapid development of the technology, along with its widespread availability, will inevitably contribute to an increase in successful Social Engineering frauds over the coming years and, in turn, to more claims being notified.
It is worth considering what the potential implications could be for the insurance industry.
Will we see specific policy exclusions for frauds where the cause was deception via Deepfake technology?
Could the relevant risks attract a higher deductible, a higher premium, or eventually become uninsurable?
Will sub-limits be applied? And how will the crime and cyber policies interact?
There is no doubt that insureds should be, and in many cases are, taking the threat of Social Engineering (and other cyber) fraud seriously. Preventative measures are paramount, including employee training on how to respond to requests received via video or voice. From our work with financial institutions, we are seeing that banks are developing AI-powered screening tools, including those that utilise behavioural analysis.
Needless to say, detailed and accurate proposal information in respect of procedures and controls will be key for insurers, and it will be incumbent on insureds to provide it. We have already seen proposal forms evolving, with many now referring to specific IT controls and the verification of payment details.
Please do contact us for any further information.