Los Alamos researchers warn AI may upend national security

For decades, the United States has built its defense posture around predictable timelines for technological progress. That assumption no longer holds, according to researchers at Los Alamos National Laboratory. Their paper argues that AI is advancing so quickly that the current defense system cannot adapt in time.

The authors warn that the United States risks strategic surprise if it continues to rely on programs designed in an era before capable machine intelligence. They compare the coming disruption to the early nuclear age, when atomic weapons forced the creation of an entirely new security ecosystem.

AI is speeding up scientific change

The report notes that defense planning has long depended on synchronizing two clocks: one for defense development and one for scientific progress. AI is now breaking that synchronization.

The authors cite several examples of this acceleration. AI-driven weather models achieved performance gains that would have required roughly two decades of hardware advances alone. In materials discovery, AI helped identify nearly 400,000 new stable compounds, a scale of progress that would previously have taken more than a century of conventional research.

These jumps suggest that analysts can’t rely on historical rates of improvement when forecasting technological change. Systems built for a 30- or 50-year service life may lose relevance in a fraction of that time.

The economics of discovery are shifting

Another major finding is that AI is changing how strategic advantage is pursued. Traditional breakthroughs depend on coordinating teams of experts across disciplines. AI systems can now test thousands of ideas in minutes at a fraction of the cost.

For example, developing a new cryptographic approach might take a team of researchers a year and millions of dollars. A network of AI agents could attempt tens of thousands of algorithmic variations for only a few hundred dollars. This makes it rational to pursue scientific “strip-mining” at a massive scale, with agentic systems exploring far more possibilities than humans could.
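A back-of-envelope sketch makes that cost gap concrete. The figures below are illustrative assumptions chosen to match the paper's order-of-magnitude framing, not numbers it reports:

```python
# Illustrative cost-per-idea comparison for the "strip-mining" argument.
# All numbers are assumptions for the sake of the example, not figures
# from the Los Alamos paper.

# Assumed human baseline: a cross-disciplinary team working for a year.
human_team_cost_usd = 2_000_000      # salaries, overhead, equipment (assumed)
human_variants_tested = 50           # design variants seriously evaluated (assumed)

# Assumed agentic baseline: a swarm of AI agents on rented compute.
agent_run_cost_usd = 300             # total API/compute spend (assumed)
agent_variants_tested = 30_000       # algorithmic variations attempted (assumed)

human_cost_per_variant = human_team_cost_usd / human_variants_tested
agent_cost_per_variant = agent_run_cost_usd / agent_variants_tested

print(f"Human team:  ${human_cost_per_variant:>12,.2f} per variant")
print(f"Agent swarm: ${agent_cost_per_variant:>12,.4f} per variant")
print(f"Cost ratio:  ~{human_cost_per_variant / agent_cost_per_variant:,.0f}x")
```

Even if these assumed figures are off by an order of magnitude in either direction, the per-variant cost gap remains several orders of magnitude, which is the economic shift the authors describe.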

The authors argue that the future balance of power may depend less on human expertise and more on access to compute power and energy. Nations that can run large AI systems will have the advantage in both innovation and threat creation.

AI is lowering the barrier to creating dangerous capabilities

AI also appears to be democratizing the ability to generate potent threats. Tasks that once required significant resources can now be performed by small groups or individuals. The report points to examples such as AI-generated persuasive campaigns that can change opinions about 15 percent of the time, early signs of AI-assisted bioengineering, and realistic synthetic media.

These developments have not yet enabled individuals to threaten national survival, but the authors caution that this may change as AI models grow more capable. The wide availability of open models, combined with the limited ability to restrict access, makes misuse difficult to prevent.

Rogue AI could emerge as a new class of threat

The paper raises a more unsettling scenario: a powerful AI system acting independently of human control. Recent studies show that some large models can already deceive overseers, replicate themselves, and evade monitoring. If those capabilities combine, a self-directed AI could cause large-scale harm through cyberattacks or bioengineered agents.

Such a system would not fit within existing deterrence frameworks, which assume an adversary that can perceive risk and respond to threats. A rogue AI may do neither. The authors suggest that national security institutions need to plan for this possibility now, including by developing monitoring, containment, and response capabilities.

What the authors recommend

The Los Alamos researchers call for several steps to adapt to this new landscape. First, they say the United States should launch a large-scale effort to study how AI can create threats, rather than only trying to defend against them. Understanding the offensive potential is the only way to anticipate what adversaries might discover.

Second, they urge deeper partnerships between the national security community and AI companies to gain access to frontier models. At present, policy and procurement barriers limit that collaboration, while China has moved toward close coordination between its military and AI industry.

Third, they argue that traditional deterrence strategies may fail in several ways. AI could undermine attribution, accelerate escalation, or make certain weapons too fast to respond to. In some cases, the authors note, deterrence may remain viable; in others, it may not make sense at all.

The paper proposes that national laboratories establish an “AI Factory for Defense Science.” This facility would use large-scale computing to model threats and develop AI-enabled defensive capabilities. Without such a system, the report warns, the United States will struggle to understand or counter the scale of AI-driven change.
