What security pros should know about insurance coverage for AI chatbot wiretapping claims
AI-powered chatbots raise profound concerns under federal and state wiretapping and eavesdropping statutes, concerns that are now being tested by recent litigation and that create greater exposure for the companies and developers using this technology. Security professionals who integrate AI chatbots into their business face uncertainty about whether insurance will cover privacy-related claims arising from these technologies.
In this Help Net Security interview, Stephanie Gee, Insurance Recovery Counsel at Reed Smith, discusses the development of these privacy claims as they pertain to AI chatbots, along with common coverage issues and solutions for security professionals seeking to protect against these risks.

Recent lawsuits have started to test how wiretapping and eavesdropping statutes apply to AI chatbots. What makes these cases legally significant, and how do they differ from earlier privacy litigation involving analytics tools or cookies?
There are subtle differences in how courts view privacy litigation arising from the use of AI chatbots compared with litigation involving analytics tools like session replay or cookies. Both types of claims involve allegations that a third party is intercepting communications without proper consent, often under state wiretapping laws, but the legal arguments and defenses vary because the data being collected is different. For example, session replay technology can record and recreate a user’s physical interactions on a website or app (like clicks, scrolls, and keystrokes).
A central issue in that litigation is whether the recording of such physical interactions constitutes, under the relevant statute, a recording of the content of a communication. In contrast, an AI chatbot can collect or record substantive conversations with a user. A central issue then becomes whether the AI chatbot is a party to the conversation, and therefore incapable of “intercepting” the communication.
While organizations often have meritorious defenses against these claims – including that users consent to these types of interactions – early plaintiff success at the motion-to-dismiss stage in AI chatbot privacy suits makes these claims significant: it increases the likelihood that plaintiffs will pursue similar litigation, which could subject organizations to costs and fees, including those required to respond to a complaint and to participate in any discovery plaintiffs seek while a motion to dismiss is pending.
General liability and cyber insurance policies often exclude or narrowly define coverage for “statutory privacy violations.” How do those exclusions play out in the context of AI-driven chatbot litigation?
Whether an exclusion will ultimately impact coverage depends on both the specific language of the exclusion and the allegations raised in the underlying lawsuit. For example, broadly worded exclusions with “catch-all” phrases precluding coverage for any statutory violation may be more difficult for a policyholder to overcome than an exclusion that identifies specific statutes by name.
Because these claims are relatively new, we have yet to see significant examples of how this plays out in insurance coverage litigation. However, we saw similar arguments in coverage litigation where the underlying suit alleged violations of the Biometric Information Privacy Act (BIPA). In some instances, courts refused to apply statutory exclusions where BIPA was not specifically identified. Additionally, the underlying lawsuit may contain other causes of action – negligence, for example – relating to conduct separate from a statutory violation, which may implicate coverage regardless of an exclusion for statutory violations.
If a company faces a privacy class action tied to its chatbot’s use of user data, what are the most common coverage pitfalls you see when policyholders turn to their insurers for defense or indemnity?
Apart from the specific statutory exclusions addressed above, one pitfall policyholders face is general uncertainty as to whether a policy covers AI risks. Oftentimes the policyholder is relying on “silent” coverage for claims relating to AI, where the policy neither expressly covers nor expressly excludes AI risks. Silent coverage may lead to ambiguity and opens the door for insurers to deny the claim and argue that these “novel” risks were never intended to be covered. Disputes with insurers over silent coverage can be both lengthy and costly.
What practical steps can organizations take now, in their policy reviews, vendor contracts, or chatbot configurations, to minimize both legal and insurance coverage risk?
To help mitigate risks, organizations should review their user consent mechanisms for AI chatbot communications. Consent does not always mean signing a form; it could include prominently displaying chatbot privacy notices before any data collection, providing easy access to the business’s privacy policy detailing how chatbot interactions are stored, and using automated disclaimers at the start of each chat session. Organizations should consider that state laws may set different standards for acceptable notice and consent.
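The notice-and-consent practices described above can be enforced at the application level. Below is a minimal sketch (all class and method names are hypothetical, not tied to any particular chatbot framework) of a session that shows an automated disclaimer at the start of every chat and withholds transcript storage until the user acknowledges it:

```python
# Hypothetical sketch: gate chatbot data collection on user consent.
from dataclasses import dataclass, field

# Automated disclaimer surfaced before any data collection.
DISCLAIMER = (
    "This chat may be recorded and processed by an AI assistant. "
    "See our privacy policy for how transcripts are stored."
)

@dataclass
class ChatSession:
    consent_given: bool = False
    transcript: list = field(default_factory=list)

    def start(self) -> str:
        # Disclaimer shown at the start of each session.
        return DISCLAIMER

    def record_consent(self, acknowledged: bool) -> None:
        self.consent_given = acknowledged

    def receive(self, message: str) -> str:
        if not self.consent_given:
            # Nothing is stored until the user has consented.
            return DISCLAIMER + " Please confirm to continue."
        self.transcript.append(message)
        return "ok"

session = ChatSession()
print(session.start())
session.receive("hello")          # blocked: consent not yet given, not stored
session.record_consent(True)
session.receive("hello")          # now recorded in the transcript
```

The key design point is that the consent check sits in front of every storage path, so a missed disclaimer cannot silently result in recording; what counts as adequate notice and acknowledgment will still vary by state law.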
Organizations should also carefully review and consider their contract terms with any third-party AI tool providers. For example, an organization may be more likely to successfully argue that the AI chatbot is an extension of the organization (and not a third-party listener) if the AI tool provider is not allowed to use the data it receives for its own benefit.
As regulators and courts clarify the legal boundaries around chatbot data collection, what role do you see insurance recovery counsel playing in helping clients bridge the gap between policy language and technology?
Coverage counsel can assist organizations, even before any litigation is filed, by evaluating an organization’s current coverage program and advising whether it is adequately protected against emerging AI risks. This can include reviewing all lines of coverage for exclusions tied to statutory violations, eavesdropping, wiretapping, or intentional conduct, and offering ways to narrow exclusions where possible. Coverage counsel may also identify areas where insurers may be open to negotiating policy endorsements that expressly cover AI-related communications.
Once litigation has been filed, coverage counsel can help policyholders determine which insurers should be notified and navigate notice requirements, which may differ across lines of coverage. Coverage counsel can also help policyholders evaluate their best path to coverage and communicate with or respond to insurers’ requests for information, resolving potential coverage disputes without the need for litigation where possible.