Legal gaps in AI are a business risk, not just a compliance issue
A new report from Zendesk outlines a growing problem for companies rolling out AI tools: many aren’t ready to manage the risks. The AI Trust Report 2025 finds that while AI is moving into customer service and support, only 23% of companies feel highly prepared to govern it.
The report highlights concerns ranging from data privacy to model bias. But the core challenge is trust: when customers don’t understand or feel comfortable with how AI is used, they’re less likely to engage. And when companies don’t have frameworks in place, they expose themselves to legal, reputational, and operational fallout.
Compliance isn’t keeping up
One of the biggest concerns for legal teams is the fragmented nature of AI regulation. While the EU’s AI Act has taken center stage globally, many countries and U.S. states are rolling out their own frameworks. That means businesses need to comply with multiple, sometimes conflicting, sets of rules.
According to the report, only 20% of companies have a mature governance strategy for generative AI. That leaves most firms scrambling to build processes for consent, data handling, model oversight, and explainability, often after the tools are already in use.
For CISOs and CLOs, this late-stage involvement can be a problem. Legal reviews may come too late to shape system design or vendor choices, increasing the chances of a regulatory misstep.
Shana Simmons, Chief Legal Officer, Zendesk, told Help Net Security: “Our AI governance is built around core principles that apply across legal jurisdictions—like privacy and security by design, transparency and explainability, and customer control. We embed AI-specific governance steps directly into our product development process to ensure that risks are identified and mitigated, while minimizing bottlenecks for the majority of our AI features, which present limited risk.”
AI introduces new types of risk
The report outlines several AI-specific threats that legal teams and CISOs must understand. These include:
- Jailbreaking, where users craft prompts to push AI tools past their safeguards and get them to say or do something they shouldn't
- Prompt injection, where attackers hide instructions in the input or content an AI processes in order to manipulate its behavior
- Hallucinations, where the AI generates incorrect or fabricated information
- Data leakage, where sensitive information ends up in AI outputs
These risks go beyond typical IT threats. For example, if an AI model gives customers wrong answers or leaks personal information, the business could face both legal claims and reputational harm. And if that AI behavior cannot be explained or audited, defending those decisions becomes much harder.
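To make the data-leakage and auditability points concrete, here is a minimal sketch, in Python, of the kind of output check a support team might place in front of an AI reply. Everything in it (the function name, the regex patterns, the JSONL audit file) is a hypothetical illustration, not something described in the Zendesk report; a production deployment would rely on a dedicated PII-detection or DLP service rather than a handful of regular expressions.

```python
import re
import json
from datetime import datetime, timezone

# Hypothetical patterns for obviously sensitive strings; illustrative only.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_ai_reply(reply: str, conversation_id: str) -> str:
    """Redact likely sensitive data from an AI reply and record the check."""
    findings = []
    redacted = reply
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(redacted):
            findings.append(label)
            redacted = pattern.sub(f"[REDACTED {label.upper()}]", redacted)

    # Append-only audit record so the behavior can be explained later.
    audit_entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "conversation_id": conversation_id,
        "findings": findings,
        "redacted": bool(findings),
    }
    with open("ai_output_audit.jsonl", "a", encoding="utf-8") as log:
        log.write(json.dumps(audit_entry) + "\n")

    return redacted
```

The point is less the redaction itself than the audit trail: if a customer later challenges an AI answer, the company can show what was checked and when.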
Customers expect oversight
Customers are paying attention. Zendesk cites research showing that customers want to feel “respected, protected, and understood” when they interact with AI. That means companies must go beyond simple disclaimers or checkboxes.
Customers now expect to know when AI is involved, how it works, and what control they have over their data. If those expectations are not met, companies could see increased churn, customer complaints, or even class-action lawsuits—especially in regulated industries like healthcare or finance.
For legal teams, that raises new questions about product design, vendor contracts, and internal accountability. Who owns the risk when AI goes wrong? What happens if an agent relies on a flawed AI recommendation? These are business questions that CLOs and CISOs need to answer together.
What legal leaders can do now
Companies that treat AI governance as an afterthought are putting themselves at risk. For legal teams, the response needs to be proactive, not reactive. That means working closely with CISOs to:
- Audit current AI deployments for gaps in transparency, fairness, or consent
- Build flexible compliance frameworks that can adapt as laws evolve
- Ensure vendors are contractually bound to governance standards
- Participate early in AI product planning, not just final reviews
Most importantly, it means helping the business set guardrails. If a customer sues over an AI decision, the company should be able to show how that decision was made, who reviewed it, and what safeguards were in place.
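One practical way to meet that bar is to record each AI-assisted decision in a structured, append-only log. The sketch below is a hypothetical example of what such a record could capture; the field names and storage format are assumptions for illustration, not a standard and not anything the report prescribes.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass
class AIDecisionRecord:
    """Illustrative audit record for a single AI-assisted decision."""
    decision_id: str
    model_version: str            # which model/configuration produced the output
    input_summary: str            # what the model was asked (or a hash of it)
    output_summary: str           # what it recommended
    safeguards_applied: list      # e.g. ["pii_redaction", "toxicity_filter"]
    human_reviewer: Optional[str] # who signed off, if anyone
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def persist(record: AIDecisionRecord, path: str = "ai_decisions.jsonl") -> None:
    # Append-only storage keeps a defensible trail without rewriting history.
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(record)) + "\n")
```

Captured consistently, records like this give legal teams the raw material to answer the "how was this decision made, and who reviewed it" questions before a regulator or plaintiff asks them.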