Prompt injection tags along as GenAI enters daily government use

Routine use of GenAI has moved into daily operations in state and territorial government environments, placing new security risks within common workflows.

GenAI prompt injection risk

A Center for Internet Security (CIS) report, Prompt Injections: The Inherent Threat to Generative AI, identifies prompt injection as a persistent concern tied to that adoption.

Adoption expands exposure

Use of AI tools has increased in government IT teams. A 2025 NASCIO survey of 51 state and territorial CIOs found that 82% reported employees using GenAI in daily work, up from 53% the prior year.

Most organizations have moved beyond early testing, with widespread pilot programs, proofs of concept, and employee training already in place. AI, GenAI, and agentic AI ranked as the number one policy and technology priority for 2026.

These tools support tasks such as summarizing documents, responding to emails, writing code, and managing schedules.

“GenAI tools often have privileged access to systems and data, which enhances their operational value but also makes them appealing targets for threat actors,” the researchers wrote.

Model behavior creates a security gap

Prompt injection has been documented for more than a decade, with research tracing it back to 2013. Targeted training can improve how models handle these attacks, but studies indicate that training alone does not provide sufficient protection.

The primary weakness lies in how language models process input. The report notes that LLMs do not separate instructions from other data. A model can process embedded malicious instructions in the same way as a normal request.

This behavior enables two types of prompt injection. Direct prompt injection occurs through direct interaction with the model, including attempts to override safeguards.

Indirect prompt injection places malicious instructions inside external content such as web pages, emails, or documents that AI systems later retrieve and process.

“Prompt injections can poison GenAI agentic databases, allowing the attack to persist across user sessions and be referenced by other applications, as well as poison external datastores referenced by the GenAI ecosystem, like cloud storage and email inboxes. If GenAI applications are allowed to execute code, then they can be manipulated to do so remotely on behalf of attackers,” they warn.

OWASP has identified prompt injection as the top risk category for GenAI and LLM applications.

Examples show how attacks unfold

Several proof-of-concept scenarios illustrate how prompt injection can move through connected AI systems and applications.

One example involves an AI agent scanning a webpage. Hidden instructions embedded in markup, metadata, or rendered content can direct the agent to collect and transmit sensitive data. In a demonstration, a GenAI code assistant processed instructions hidden in a documentation page and sent code snippets and AWS API key data to an external URL. The destination service was allowlisted in default Antigravity settings.

An additional case involved an update to the Amazon Q extension for Visual Studio Code in July 2025. The update introduced a prompt that instructed the AI agent to delete non-hidden files, terminate AWS servers, and remove cloud data. AWS issued a patch two days after the update and released a security bulletin.

The Morris II worm demonstrates how prompt injection can propagate through systems. A malicious prompt embedded in an email entered a retrieval-augmented generation database through an AI email assistant. The assistant then generated additional emails containing the same malicious prompt along with sensitive information.

The GeminiJack case involved malicious instructions embedded in enterprise data sources such as Google Docs or calendar entries. When retrieved through search, the instructions triggered data exfiltration to external infrastructure. Google later separated Vertex AI Search from Gemini Enterprise to address the issue.

Controls focus on limiting access and oversight

Organizations are advised to define acceptable use policies for AI tools and provide user training on handling sensitive data and recognizing malicious prompts.

Other measures include keeping track of which systems and data AI platforms can reach, enforcing least privilege, and requiring human approval before actions that involve sensitive data or code execution. Regular log reviews can also help identify unusual behavior.

Don't miss