
By Michael Loney
Feb 19 (The Insurer) – As artificial intelligence-related claims start seeping beyond media liability, insurers are grappling with how to underwrite AI, with most so far leaning against exclusions despite some concerns.
The majority of litigation around AI technology has centred on copyright infringement, which falls into the media liability basket, or on AI-washing claims, which are outside the purview of cyber insurers.
In a blog post on AI and insurance published on February 13, retail broker Woodruff Sawyer noted that cyber and technology E&O insurance policies are perhaps the most directly impacted by AI.
“As AI enhances the capabilities of cybercriminals, the threat landscape becomes more perilous. AI can enable more sophisticated phishing attacks and ransomware exploits, necessitating a strong cybersecurity strategy,” it said.
Woodruff continued: “On the flip side, AI is also a critical tool for enhancing cybersecurity defenses. Companies must stay ahead by integrating AI tools to detect and mitigate cyber threats effectively. Insurers are increasingly scrutinising how businesses use AI in their cybersecurity strategies during the renewal process.”
The broker suggested that a bigger challenge is when AI is the product, or controls a fundamental aspect of the product, and an incident causes financial harm rather than physical damage.
“This can lead to claims under the technology E&O component of a cyber policy. Cyber insurers are not as confident about their ability to understand and predict these E&O claims, so AI-focused companies are experiencing higher premiums and more limited cyber/E&O coverage options due to the lack of competition,” the blog post said.
Insurers and cyber experts at the NetDiligence Cyber Risk Summit in Miami Beach in February said they still viewed the technology as nascent and expressed uncertainty around investment in generative AI. They were also wary of a future in which AI exclusions in their cyber policies, or AI-specific products in their catalogues, come to be seen as essential.
As AI-related claims start seeping beyond media liability, some at the summit saw a possibility of that changing.
Speaking at the NetDiligence summit, Andrew Podgorny, vice president of cyber at Relm Insurance, noted his company released its first trio of AI insurance products in January as a precaution and distinguishing factor more than anything else.
The products – Novaai, Pontaai, and Rescaai – aim to indemnify against U.S. and EU regulatory risks, cybersecurity exposures, and the privacy issues that may arise from both.
"Look at what [carriers] do, not what they say," Podgorny said, citing the war-exclusion clause as an example of cyber providers slipping in the exclusion despite maintaining that they held no coverage obligations in case of cyber threats arising from wars.
"Clearly, they were worried,” he said.
It is unclear whether AI risk is as potent; Podgorny agreed that the claims are so far trending within media liability.
NO 'KNEE-JERK' RESPONSE YET
But insurers and lawyers have been having more conversations with their clients about third-party liability and data privacy claims that may arise from putting privileged information into AI products like ChatGPT or Copilot – and as states begin to roll out AI-specific laws, those claims are likely to start showing up, said Jennifer Beckage, founder and data security attorney at the Beckage Firm.
While insurers have kept an eye on these developments, they aren't yet alarmed enough to exclude AI or differentiate it from other tech in their cyber policies.
Alexandra Bretschneider, vice president and cyber practice leader at Johnson Kendall and Johnson, said she was relieved to see that the insurance industry has not had the same "knee-jerk" response as the media.
"I'm very happy to see that I'm not seeing these AI exclusions popping up left and right; but I do think we will start to underwrite it," she said.
She said that, instead of excluding AI, carriers should use AI-specific language that affirms coverage.
Some carriers may start developing AI-specific products or incorporate exclusions, which may come with their own set of complications, she noted.
"Remember, cyber has first- and third-party coverages ... will this AI product be just one-sided for your own damages, or mirror what you see in a cyber policy, in terms of coverage for both damages for yourself and others?" Bretschneider asked.
"Then will that include bodily injury, property damage, et cetera? it starts to get all over the map, which is why I don't want exclusions. It would cost too many disconnected pieces of the puzzle."
For some, the best way to tackle the legal and privacy risks that come with AI use at this stage is to avoid ‘othering’ it from mainstream technologies.
Beckage stressed that regulators like the SEC and FTC are poised to "come knocking" to review data-sharing documentation between companies and their vendors – but she said that data hygiene should already be part of a business's compliance plan. In her view, insurers can better interrogate policyholders before renewals if those policyholders have made AI disclosures.
"Let's not overthink it," Beckage said.
"It's just another technology that we're using that we have to review and monitor.” She used the example of a call centre: “At some point annually, we're reviewing our employees during an HR review. Sometimes we're listening in on the phone to hear what they're saying and that they're following the script. It's the same for any AI tool and the training data during an incident."
BUYERS MUST CONSIDER HOW THEY ARE USING AI
During a hot topics session at the NetDiligence summit, Stephanie Snyder Frenier, senior vice president of cyber liability practice and national director for cyber advantage at Arthur J Gallagher, said that brokers are trying to help their clients think about how they are using AI.
“Do they (the customers) have a policy on AI and are they educating their employees on how AI should and should not be used?” she asked.
“I encourage people to take advantage of AI, because it does make things a lot easier. But at the same time, we need to be mindful of the information that we're putting into AI from a privacy standpoint, because generative AI continues to learn from the information that we're entering into it.”
This means that organisations need to be thinking about how they are educating their employees, and then ensuring that, if they are using their own AI, it is firewalled off.
“They also need to be aware of the data that they're feeding into the AI. Is it copyrighted? Is there personal information in that data set? Are there regulations around the personal information that may be in that data set? Is it EU-related data? Is it U.S. data? How do all of these different privacy laws work when it comes to the data that's being fed into the model?” she said.
“There's so many considerations that you really need to think about as we talk about AI.”
Snyder Frenier also raised the issue of “silent AI” in various lines.
“If you have these AI models and you're relying on them to make decisions, are you creating an employment practices liability situation because there's bias inherent in your AI that you're using to make decisions?” she posed.
“I think you could talk about silent AI examples in pretty much every policy that's out there, depending on how you're using that large language model to advise you to make decisions for your organisation. It creates a lot of potential liability.”
Like other speakers at the NetDiligence event, Snyder Frenier also said that media liability is where claims related to AI have been seen so far.
“There are allegations of copyright infringement. If that model is ingesting copyrighted material without permission, you have a media liability, so we're seeing a lot of those,” she said.
Snyder Frenier continued that tech E&O claims have not been seen so far but could arise in situations where someone pays for an AI model and does not get what they paid for.
Bob Parisi, head of cyber solutions for North America at Munich Re, commented that carriers are asking questions about AI in the underwriting process, just as they do about other potential liabilities.
“You’re asking them, how do you use AI or do you use AI?” he said.
Parisi said that if companies give a binary ‘yes’ or ‘no’ answer but do not provide other details, this could be a red flag. But he noted that the market has previously seen companies absorb all kinds of technology, such as cloud computing, and that AI is similar.
“There's a process and a model, and if they're doing that, you at least have some comfort that they are doing things in as good a way as they can,” he said.
“And then you can dig down and see what they are doing. Are they using it for client-facing purposes? Is it internal? Is the CISO using it to monitor logs?”
Snyder Frenier responded: “I would add that as a broker, I don't see enough underwriters asking about AI. Underwriters are very focused on ransomware controls, and certainly I think a lot of focus right now on wrongful collection.
“But AI for a lot of organisations is aspirational. I think it is something that, collectively, as underwriters and brokers we need to make sure that clients have an eye on it, because it is kind of the future of risk.”