Financial groups lay out a plan to fight AI identity attacks
Generative AI tools have brought the cost of deepfake production low enough that criminals and state-sponsored actors now use them routinely against financial institutions. A …
Breaking out: Can AI agents escape their sandboxes?
Container sandboxes are part of routine AI agent testing and deployment. Agents use them to run code, edit files, and interact with system resources without direct access to …
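The isolation the teaser describes can be approximated, at a much smaller scale than a real container, by running untrusted code in a constrained subprocess. This is an illustrative sketch only: production agent sandboxes use containers or VMs, and the specific limits below (CPU seconds, address-space cap) are assumptions chosen for the demo, not values from the article.

```python
# Minimal sketch: execute untrusted code in a resource-limited subprocess.
# Unix-only (resource module and preexec_fn); limits are illustrative.
import resource
import subprocess
import sys

def run_sandboxed(code: str, timeout: float = 5.0) -> str:
    def limit_resources():
        # Cap CPU time at 2 seconds and address space at 256 MiB.
        resource.setrlimit(resource.RLIMIT_CPU, (2, 2))
        resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))

    proc = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode, ignores env/site
        capture_output=True,
        text=True,
        timeout=timeout,
        preexec_fn=limit_resources,
    )
    return proc.stdout

print(run_sandboxed("print(2 + 2)"))
```

Note that OS-level resource limits restrict what the child process can consume, but not what it can see: a real sandbox must also constrain filesystem and network access, which is exactly the boundary the "escape" question probes.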
AI SOC vendors are selling a future that production deployments haven’t reached yet
Vendors selling AI-powered security operations platforms have built their pitches around a consistent set of promises: autonomous threat investigation, dramatic reductions in …
A nearly undetectable LLM attack needs only a handful of poisoned samples
Prompt engineering has become a standard part of how large language models are deployed in production, and it introduces an attack surface most organizations have not yet …
Google’s TurboQuant cuts AI memory use without losing accuracy
Large language models carry a persistent scaling problem. As context windows grow, the memory required to store key-value (KV) caches expands proportionally, consuming GPU …
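The proportional growth the teaser mentions follows from the standard KV-cache size formula: two tensors (keys and values) per layer, each of shape heads × head-dim per token. A rough back-of-envelope sketch, using an illustrative Llama-2-7B-like shape (32 layers, 32 KV heads, head dim 128, fp16) that is an assumption here, not a figure from the article:

```python
# Estimate KV-cache memory for a decoder-only transformer.
# Size = 2 (K and V) * layers * kv_heads * head_dim * seq_len * bytes_per_value.
# Model shape below is illustrative (roughly Llama-2-7B-like), not from the article.

def kv_cache_bytes(seq_len, n_layers=32, n_kv_heads=32, head_dim=128, bytes_per_value=2):
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_value

for ctx in (4_096, 32_768, 131_072):
    gib = kv_cache_bytes(ctx) / 2**30
    print(f"{ctx:>7} tokens -> {gib:6.1f} GiB")
# 4k tokens -> 2 GiB; the cost scales linearly, so 128k tokens -> 64 GiB.
```

At this shape the cache alone exceeds a typical GPU's memory well before the context window is exhausted, which is the pressure that KV-cache quantization schemes aim to relieve.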
Llamafile, Mozilla’s portable LLM runner, gets GPU support and a rebuilt core
Running a large language model on a single machine without cloud access or a container runtime remains a priority for practitioners working in air-gapped or …
AI got it wrong with high confidence. Now what?
In this Help Net Security interview, Christian Debes, Head of Data Analytics & AI at SPRYFOX, talks about the growing gap between what AI models do and what their …
Engineering trust: A security blueprint for autonomous AI agents
AI agents have evolved beyond chatbots: instead of just answering questions, they now execute actions through integrated tools, often autonomously, and as such the traditional security …
LLMs change their answers based on who’s asking
AI chatbots may deliver unequal answers depending on who is asking the question. A new study from the MIT Center for Constructive Communication finds that LLMs provide less …
Waiting for AI superintelligence? Don’t hold your breath
AI’s impact on systems, security, and decision-making is already permanent. Superintelligence, often referred to as artificial superintelligence (ASI), describes a …
Unbounded AI use can break your systems
In this Help Net Security video, James Wickett, CEO of DryRun Security, explains cyber risks many teams underestimate as they add AI to products. He focuses on how fast LLM …
EU’s Chat Control could put government monitoring inside robots
Cybersecurity debates around surveillance usually stay inside screens. A new academic study argues that this boundary no longer holds when communication laws extend into …