AI: Interpreting regulation and implementing good practice

Businesses have been using artificial intelligence for years, and while machine learning (ML) models have often been taken from open-source repositories and built into business-specific systems, model provenance and assurance have not always necessarily been documented nor built into company policy. Standards and guidance being developed are rightly aiming to be risk-focused and flexible. Still, implementing these in the businesses that create and consume AI-enabled products and services will make a difference.

AI regulation

However, there are many open questions regarding the practicalities of assurance (what does good look like? Who is qualified to perform assessments?), liability (the software supply chain is complex), and even whether continuous development of this new technology is responsible.

Who is liable for model failure regarding the governance of these systems?

When considering third-party risk, it’s worth considering what templates the ML models have been built on and then understanding the journey of a product using these models. For example, how many ML models have been spawned from previous versions? If a vulnerable ML model has been used from the beginning, then it’s fair to say that it will be present in every subsequent version.

To succeed, a degree of trust must exist to allow AI-enabled services to achieve their designed ambition. Greater connectivity will enable these systems to interact with others to deliver products and services to customers. However, this interconnected web of trust means that unless appropriately managed, when one part is compromised, without common security standards for AI-enabled services, it will be difficult to govern.

Ensuring consistency between the regulation, standards and guidance being created

Emerging standards, guidance and regulation for AI are being created worldwide, and it will be important to align this and create a common understanding for producers and consumers. Organizations such as ETSI, ENISA, ISO and NIST are creating helpful cross-referenced frameworks for us to follow, and regional regulators, such as the EU, are considering how to penalize bad practices.

In addition to being consistent, however, the principles of regulation should be flexible, both to cater for the speed of technological development and to enable businesses to apply appropriate requirements to their capabilities and risk profile.

An experimental mindset, as demonstrated by the Singapore Land Transport Authority’s testing of autonomous vehicles, can allow academia, industry and regulators to develop appropriate measures. These fields need to come together now to explore AI systems’ safe use and development. Cooperation, rather than competition, will enable safer use of this technology more quickly.

A clear explanation of what controls might be needed and why can build trust in new and complex technologies and encourage businesses to act. It is important that, when it comes to AI technology, risk assessments are based on use cases and outcomes rather than the software alone. These systems will evolve, so they must be understood in context.

What might be appropriate guidance for one company is not necessarily suitable for another. There needs to be a risk and business resilience mindset rather than compliance alone.

Implementing good practice

As we’ve seen with cyber security, it is easy to set out best practice guidelines and warn of cyber threats; it is harder to enable secure processes and use threat intelligence to improve defenses. Additionally, AI doesn’t have dedicated teams to hold responsibility yet, and investment for such a thing must be balanced with customer value. There is also a need to build the right mix of AI capability and understanding within businesses, and there are even calls for AI professionalization.

So where to start? Work out what you have and understand the use cases.

Regardless of what regulation is coming, it is worthwhile for every business to understand how the risk is being evaluated, the current exposure level, and how standards and regulation will affect the company.

Not just the risk of accuracy but responsible AI risk

We should consider where AI risk sits in the context of cybersecurity risk and the broader enterprise risk management (ERM) framework. The question of where the responsibility for AI risk should sit within an organization should be: the security team or the board?

It would seem sensible that AI risk should be embedded into innovation and planning – we can address threat modelling now and even strive for a future that includes “ML-Sec-Ops” – as the lifecycle requirements of these systems will also need to be monitored and managed.
We’ve seen how the pace of technology innovation can leave security teams struggling to keep up, and they are often an afterthought or part of the end of the development cycle, which is just not scalable.

A much better way is for developers, whilst considering a user journey for their new application or product, to consider the malicious user journey and that what they’re building could be abused. The same could be applied to AI-enabled services.

Don't miss