Why legal must lead on AI governance before it’s too late

In this Help Net Security interview, Brooke Johnson, Chief Legal Counsel and SVP of HR and Security at Ivanti, explores the legal responsibilities in AI governance, highlighting how cross-functional collaboration enables safe, ethical AI use while mitigating risk and ensuring compliance.


From a legal and governance perspective, what are the biggest risks of unmanaged AI use?

The core risks lie at the intersection of technology, ethics, and law. GenAI tools, while powerful, introduce challenges around data privacy, bias, and security. Take recruiting, for instance. If a GenAI-powered hiring tool makes decisions based on flawed or biased training data, the risk of discriminatory outcomes is both serious and likely. Without transparency into how these tools operate, companies risk liability for outcomes they may not fully understand.

Consequently, addressing these risks cannot be left solely to IT teams. Legal must be at the forefront of AI governance strategy to shape responsible integration from the outset.

How should legal teams work with HR, IT, and security to create clear, enforceable guardrails that don’t stifle innovation?

Successful AI governance is about enabling responsible innovation. A cross-functional, organisation-wide approach to risk is key to achieving this. At Ivanti, we’ve seen that when traditionally siloed departments, such as legal, HR, IT, and security, come together around shared objectives, the outcome is more than just compliance; it’s strategic acceleration.

Creating this kind of alignment requires intentional design. Begin by defining shared objectives that transcend departmental boundaries. Select team members for both their functional expertise and their ability to think beyond their remit. Develop metrics that measure collective outcomes, not just individual activity.

Critically, this approach demands leadership with a broad business mindset: professionals who view compliance and risk not as blockers but as enablers of progress. With this in place, innovation can scale safely.

What’s the role of legal in preventing misuse or overreliance on AI in technical teams, especially with tools like GitHub Copilot, ChatGPT, or code-generating systems?

It’s tempting to simply ban unauthorised AI use, but that is short-sighted, especially when AI adoption is surging. Our research shows that 74% of IT workers are now using GenAI tools, yet, alarmingly, nearly one-third do so without telling management.

Instead of chasing rule-breakers, legal teams should lead the shift towards governed enablement. Assume AI is already in use and proactively define where and how it can be used safely. At Ivanti, our cross-functional AI Governance Council (AIGC) helps employees navigate these grey areas by reviewing tools and providing guidance that clarifies acceptable use cases.

Education is also critical. Employees are empowered to do their jobs efficiently and safely when legal teams take the lead in providing clear, practical training on the security implications of different AI tools and in explaining why restrictions exist.

How can companies ensure that AI policies are not just written, but operationalised across departments and geographies?

AI governance only works if it’s actionable. First, acknowledge that AI use is likely taking place across your organisation, whether sanctioned or not. Conduct assessments to understand what tools are being used and which ones meet your standards.

Then, create clear, pragmatic policies on when and how AI can be applied. Equip teams with vetted platforms that are easy to access and secure, reducing reliance on unsanctioned alternatives.

Training remains essential. When people understand the rationale behind AI guardrails, not just the restrictions, they are far more likely to follow them.

Finally, treat AI governance as a living process. Tools and threats evolve, and your policies should too. Cross-functional collaboration ensures continuous refinement and operational consistency across departments and geographical borders.

What’s your view on how proactive companies should be in shaping internal AI governance ahead of regulation, rather than waiting to react?

Being proactive in establishing internal AI governance is no longer optional – it’s a business imperative. The legal and ethical risks of unchecked AI use are too significant to wait for regulation to catch up. Responsible governance must be embedded from the start.

A cornerstone of Ivanti’s approach is ensuring AI systems are explainable. This means going beyond surface-level assurances: asking how AI models are built, what data they’re trained on, and how errors and biases are mitigated. Our legal team also plays a central role in selecting vendors, scrutinising both their technical capabilities and their ethical foundations.

This isn’t just about compliance; it’s about building trust. As a global company, we align not only with US laws but also with evolving international frameworks like the EU AI Act. These help us build AI that is robust, scalable, and future-proof.

AI is here to stay. By leading with governance and empowering employees, companies can reduce risk and unlock AI’s full potential responsibly, while ensuring innovation stays within ethical boundaries.
