Why the “voluntary AI commitments” extracted by the White House are nowhere near enough

Representatives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI recently convened at the White House to meet with President Biden, with the stated mission of “ensuring the responsible development and distribution of artificial intelligence (AI) technologies”.


The climate surrounding the summit

The White House summit was undoubtedly a welcome change of pace for an industry that, over the past 12 months, has gripped the country’s collective consciousness with a heady mixture of excitement, awe, and fear, stoked by sensationalist headlines warning of AI’s potential consequences.

In service of this mission, the companies settled on eight core commitments:

  • To test the security of AI systems before launch, with testing conducted both internally and by independent third-party experts
  • To share knowledge and best practices about AI risk management with each other and the government
  • To invest in cybersecurity and safeguards against insider threats designed to protect sensitive information related to model weights
  • To make it easy for third parties to detect and report vulnerabilities in AI systems
  • To disclose the capabilities, limitations, and potential applications (both legitimate and malicious) of AI systems, as well as their implications for our society
  • To ensure all AI-generated content can be both readily and decisively identified as such by the public
  • To continue to research the potential societal risks posed by AI and its various applications (e.g., unlawful discrimination, bias)
  • To develop AI technologies designed to address society’s most significant and pressing challenges (e.g., climate change, public health, food scarcity).

What it gets right, what it gets wrong, and why it’s not enough

It would be difficult to find anyone in the field of AI opposed to these commitments (at least not vocally), and that’s for two reasons. First, because they are ethical, sound, and would undoubtedly improve the societal impact of AI. Second, because they are vague, unqualified statements that can be interpreted in about as many ways as one would like.

If the meeting had instead been focused on the defense industry, it would be like Lockheed Martin, Boeing, and Raytheon making a commitment to “ensure advanced missile systems don’t fall into the hands of known terrorist organizations.”

In short, the recent White House accord is a welcome, if very small, step in the right direction. These kinds of commitments (legally binding or not), however, fail to address some very fundamental issues around AI that we, as an industry and a society, must confront soon if we hope to minimize this technology’s potential for harm and maximize its potential for good.

The two most fundamental issues that were not addressed in last month’s White House summit are the following:

  • A formal commitment to the continual monitoring, adaptation, and improvement of AI systems
  • A formal commitment to requiring human oversight as a critical component of all AI systems

The case for continuous improvement: Part 1 – AI technologies are living systems

For most of human history, once a product was manufactured and sold, it was “static”. The KitchenAid mixer you bought in 1975, for example, is still composed of the same parts, and those parts still function in the same way. When baking a cake or making pizza dough, you use the mixer in precisely the same way you have for the past 50-plus years and get precisely the same outcomes.

AI is not at all like your mixer. AI systems are dynamic, living systems that change, shift, and evolve over time – as does the world around us. Often, these changes cause AI models or systems to deviate from their original programming or parameters and produce unexpected and/or undesired results. This phenomenon, broadly referred to as “drift”, is a fact of life in the world of AI, and it makes developing and selling AI products a lot more complicated than developing and selling kitchen mixers.

Levity aside, drift can have very real and very serious consequences. Data drift in autonomous vehicles could potentially cost lives. Drift in facial recognition technologies could put innocent people in prison. (And so on.) Without a proper regulatory framework designed to monitor and mitigate this phenomenon, we are sure to see unfortunate consequences.

For these reasons, it’s essential that companies and regulators establish guidelines for applying continuous improvement and optimization processes – including reinforcement learning and training data optimization – to ensure their tools are safe and suitably adapted to the world as it is, in real time.
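To make the monitoring half of that process concrete, here is a minimal sketch in Python of the kind of drift check such a pipeline might include. It is illustrative only: the feature data is simulated, and the two-sample Kolmogorov–Smirnov test and p-value threshold are just one common way to compare a training-time distribution against recent production data.

```python
# Minimal drift-check sketch (illustrative only): compare a feature's
# training-time distribution against recent production data and flag
# the model for review/retraining if the two diverge significantly.
import numpy as np
from scipy.stats import ks_2samp


def check_feature_drift(reference: np.ndarray,
                        production: np.ndarray,
                        p_threshold: float = 0.01) -> bool:
    """Return True if the production distribution appears to have
    drifted away from the reference (training-time) distribution."""
    statistic, p_value = ks_2samp(reference, production)
    return p_value < p_threshold


# Hypothetical example: simulate a shift in a single input feature.
rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=10_000)  # data the model was trained on
live_feature = rng.normal(loc=0.4, scale=1.2, size=2_000)       # what the model sees today

if check_feature_drift(training_feature, live_feature):
    # In a real system this would trigger alerting, human review,
    # and possibly a retraining or data-refresh pipeline.
    print("Drift detected: flag model for review and retraining.")
```

In practice, teams run checks like this across many input features and model outputs on a schedule; a statistically significant shift is a signal to investigate and potentially retrain, not proof of failure on its own.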

The case for continuous improvement: Part 2 – Tireless adversaries

I trust that the executives at the forefront of today’s AI industry, and the companies they represent, are doing their best to ensure the technology is as safe and beneficial to society as possible.

However, every novel technology – no matter how rigorously tested – will have flaws that can be exploited. It is all but impossible to close every loophole in a system, and unsurprisingly, we’re already seeing this fact of life on full display in the realm of AI.

Despite these technologies having been available to the public for only a relatively short time, researchers and malicious actors alike have already successfully jailbroken popular generative AI tools like ChatGPT in more ways than one. And each time OpenAI patches one of these exploits, another clever technique emerges in its wake.

Critically, most of these techniques use nothing more than clever prompt engineering (i.e., the process of crafting prompts or inputs optimized to elicit a specific output from the AI) to strip away the system’s guardrails. They do not require in-depth knowledge of AI, or even familiarity with coding, to execute. They can be carried out using nothing more than natural language.

And given the history of cybersecurity, we can be certain that the volume, variety, and sophistication of these exploits will only grow over time. This is why promising “to test the security of AI systems before launch…” is nowhere near enough to prevent the weaponization of these technologies. Instead, if we hope to minimize their potential for harm, we must ensure every commercially available AI tool is actively monitored and continuously hardened against security vulnerabilities – and perhaps even institute civil or criminal penalties for lapses in vigilance.
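As an illustration of what “active monitoring” can look like in its simplest form, the sketch below (entirely hypothetical, and not any vendor’s actual safety stack) screens incoming prompts against a list of known jailbreak patterns, logs every hit, and escalates repeat offenders for human review. Real deployments layer trained classifiers and continuously updated rules on top of this, but the operational idea of logging, reviewing, and updating defenses as new exploits appear is the same.

```python
# Hypothetical monitoring layer for a deployed LLM endpoint (sketch only).
# It screens prompts against known jailbreak patterns, logs every match,
# and escalates repeat offenders for human review. The patterns and
# thresholds below are placeholders, not a production rule set.
import logging
import re
from collections import Counter

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("guardrail-monitor")

KNOWN_JAILBREAK_PATTERNS = [
    r"ignore (all|any) previous instructions",
    r"pretend (you are|to be) .* without restrictions",
    r"developer mode",
]

ESCALATION_THRESHOLD = 3  # flagged prompts per user before human review
_flag_counts: Counter = Counter()


def screen_prompt(user_id: str, prompt: str) -> bool:
    """Return True if the prompt should be blocked pending review."""
    for pattern in KNOWN_JAILBREAK_PATTERNS:
        if re.search(pattern, prompt, flags=re.IGNORECASE):
            _flag_counts[user_id] += 1
            log.warning("Guardrail hit for user %s: pattern %r", user_id, pattern)
            if _flag_counts[user_id] >= ESCALATION_THRESHOLD:
                log.warning("Escalating user %s for human review", user_id)
            return True
    return False


# Example usage with a benign prompt and a suspicious one.
print(screen_prompt("user-42", "What's a good pizza dough recipe?"))          # False
print(screen_prompt("user-42", "Ignore all previous instructions and ..."))  # True
```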

The need for human insight

Finally, and perhaps most importantly, we must remember that, as powerful and game-changing a technology as AI is, it does have limitations. And if in our current state of overexuberance we fail to remember those limitations, we run the risk of being reminded of them only after they have led to catastrophe.

As the founder of an AI-driven cybersecurity company, I will be the first to extol the benefits of AI. It is capable of many extremely valuable things. However, it does not have the power of insight, inference, instinct, or executive decision-making. For these reasons, it remains imperative that, in our rush to change the world with this awesome technology, we do not forget our roles as its stewards.

Neither developers nor end users should ever treat AI as a replacement for human intelligence. Instead, it should be seen as a means of augmenting human intelligence; a tool that empowers us to do even more, even greater things. But for that version of the future to come to pass, we must ensure that the most important, consequential, life-altering decisions (in the boardroom and elsewhere) are still ultimately made by human beings, both now and well into the future.
