Help Net Security newsletters: Daily and weekly news, cybersecurity jobs, open source projects, breaking news – subscribe here!

OpenAI expands its cyber defense program with GPT-5.4-Cyber for vetted researchers

Defending critical software has long depended on the ability to find and fix vulnerabilities faster than attackers can exploit them. OpenAI is expanding a program designed to …

What vibe hunting gets right about AI threat hunting, and where it breaks down

In this Help Net Security interview, Aqsa Taylor, Chief Security Evangelist at Exaforce, explains vibe hunting, an AI-driven approach to threat detection that inverts …

Prompt injection tags along as GenAI enters daily government use

GenAI has moved into routine daily operations in state and territorial government environments, placing new security risks inside common workflows. A Center for …

Google study finds LLMs are embedded at every stage of abuse detection

Online platforms are running large language models at every stage of content moderation, from generating training data to auditing their own systems for bias. Researchers …

Financial groups lay out a plan to fight AI identity attacks

Generative AI tools have brought the cost of deepfake production low enough that criminals and state-sponsored actors now use them routinely against financial institutions. A …

Breaking out: Can AI agents escape their sandboxes?

Container sandboxes are part of routine AI agent testing and deployment. Agents use them to run code, edit files, and interact with system resources without direct access to …

AI SOC vendors are selling a future that production deployments haven’t reached yet

Vendors selling AI-powered security operations platforms have built their pitches around a consistent set of promises: autonomous threat investigation, dramatic reductions in …

A nearly undetectable LLM attack needs only a handful of poisoned samples

Prompt engineering has become a standard part of how large language models are deployed in production, and it introduces an attack surface most organizations have not yet …

Google’s TurboQuant cuts AI memory use without losing accuracy

Large language models carry a persistent scaling problem. As context windows grow, the memory required to store key-value (KV) caches expands proportionally, consuming GPU …
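The scaling point in this teaser comes down to simple arithmetic: a transformer's KV cache stores a key and a value vector for every layer, head, and token, so its size grows linearly with context length. A minimal sketch using hypothetical model dimensions (illustrative assumptions, not TurboQuant's actual benchmarks):

```python
def kv_cache_bytes(num_layers: int, num_heads: int, head_dim: int,
                   context_len: int, bytes_per_value: int = 2) -> int:
    """Approximate KV-cache size for one sequence.

    The leading factor of 2 accounts for storing both keys and values;
    bytes_per_value=2 assumes fp16 storage. All model dimensions here
    are illustrative assumptions.
    """
    return 2 * num_layers * num_heads * head_dim * context_len * bytes_per_value

# A hypothetical 32-layer model with 32 heads of dimension 128,
# evaluated at two context lengths to show the linear growth:
for ctx in (8_192, 131_072):
    gib = kv_cache_bytes(32, 32, 128, ctx) / 2**30
    print(f"{ctx:>7} tokens -> {gib:.1f} GiB per sequence")  # 4.0 GiB, then 64.0 GiB
```

Quantizing the cached values (the approach the headline describes) shrinks `bytes_per_value`, which is why it attacks this memory growth directly without touching the model weights.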

Llamafile, Mozilla’s portable LLM runner, gets GPU support and a rebuilt core

Running a large language model on a single machine without cloud access or a container runtime remains a priority for practitioners working in air-gapped or …

AI got it wrong with high confidence. Now what?

In this Help Net Security interview, Christian Debes, Head of Data Analytics & AI at SPRYFOX, talks about the growing gap between what AI models do and what their …

Engineering trust: A security blueprint for autonomous AI agents

AI agents have evolved beyond chatbots: instead of just answering questions, they execute actions through integrated tools, often autonomously, and as such the traditional security …
