Help Net Security newsletters: Daily and weekly news, cybersecurity jobs, open source projects, breaking news – subscribe here!

Please turn on your JavaScript for this page to function normally.
AI
What CISOs need to know about agentic AI

GenAI has been the star of the show lately. Tools like ChatGPT impressed everyone with how well they can summarize, write, and respond. But something new is gaining ground: …

large language models
86% of all LLM usage is driven by ChatGPT

ChatGPT remains the most widely used LLM among New Relic customers, making up over 86% of all tokens processed. Developers and enterprises are shifting to OpenAI’s latest …

Agentic AI
Securing agentic AI systems before they go rogue

In this Help Net Security video, Eoin Wickens, Director of Threat Intelligence at HiddenLayer, explores the security risks posed by agentic AI. He breaks down how agentic AI …

large language models
The hidden risks of LLM autonomy

Large language models (LLMs) have come a long way from the once passive and simple chatbots that could respond to basic user prompts or look up the internet to generate …

Peter Garraghan
Why security teams cannot rely solely on AI guardrails

In this Help Net Security interview, Dr. Peter Garraghan, CEO of Mindgard, discusses their research around vulnerabilities in the guardrails used to protect large AI models. …

AI Agents in Action
Review: AI Agents in Action

If you’re trying to make sense of how to actually build AI agents, not just talk about them, AI Agents in Action might be for you. About the author Michael Lanham, Lead …

Michael Pound
Even the best safeguards can’t stop LLMs from being fooled

In this Help Net Security interview, Michael Pound, Associate Professor at the University of Nottingham shares his insights on the cybersecurity risks associated with LLMs. He …

SWE-agent
SWE-agent: Open-source tool uses LLMs to fix issues in GitHub repositories

By connecting powerful language models like GPT-4o and Claude Sonnet 3.5 to real-world tools, the open-source tool SWE-agent allows them to autonomously perform complex tasks: …

Jason Lord
When AI agents go rogue, the fallout hits the enterprise

In this Help Net Security interview, Jason Lord, CTO at AutoRABIT, discusses the cybersecurity risks posed by AI agents integrated into real-world systems. Issues like …

malicious package
Package hallucination: LLMs may deliver malicious code to careless devs

LLMs’ tendency to “hallucinate” code packages that don’t exist could become the basis for a new type of supply chain attack dubbed …

AI threats
The quiet data breach hiding in AI workflows

As AI becomes embedded in daily business workflows, the risk of data exposure increases. Prompt leaks are not rare exceptions. They are a natural outcome of how employees use …

large language models
Excessive agency in LLMs: The growing risk of unchecked autonomy

For an AI agent to “think” and act autonomously, it must be granted agency; that is, it must be allowed to integrate with other systems, read and analyze data, and have …

Don't miss

Cybersecurity news
Daily newsletter sent Monday-Friday
Weekly newsletter sent on Mondays
Editor's choice newsletter sent twice a month
Periodical newsletter released when there is breaking news
Weekly newsletter listing new cybersecurity job positions
Monthly newsletter focusing on open source cybersecurity tools