You can’t trust AI chatbots not to serve you phishing pages, malicious downloads, or bad code

Popular AI chatbots powered by large language models (LLMs) often serve up inaccurate information, whatever the topic, but researchers expect threat actors to ramp up their …

AI tools are everywhere, and most are off your radar

80% of AI tools used by employees go unmanaged by IT or security teams, according to Zluri’s The State of AI in the Workplace 2025 report. AI is popping up all over the …

How cybercriminals are weaponizing AI and what CISOs should do about it

In a recent case tracked by Flashpoint, a finance worker at a global firm joined a video call that seemed normal. By the end of it, $25 million was gone. Everyone on the call …

We know GenAI is risky, so why aren’t we fixing its flaws?

Even though GenAI threats are a top concern for both security teams and leadership, the current level of testing and remediation for LLM and AI-powered applications isn’t …

Users lack control as major AI platforms share personal info with third parties

Some of the most popular generative AI and large language model (LLM) platforms, from companies like Meta, Google, and Microsoft, are collecting sensitive data and sharing it …

Free AI coding security rules now available on GitHub

Developers are turning to AI coding assistants to save time and speed up their work. But these tools can also introduce security risks if they suggest flawed or unsafe code. …
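The kind of flaw in question is easy to picture. As a minimal illustrative sketch (not taken from the article or the GitHub rules themselves), an assistant asked for a database lookup might suggest string-formatted SQL, which is injectable; the parameterized form is the safe pattern such coding rules typically enforce:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # The kind of code an AI assistant might suggest: string-formatted SQL.
    # An input like "' OR '1'='1" turns the query into an injection.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the value as data, not SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row -> injectable
print(find_user_safe(payload))    # returns [] -> safe
```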

Before scaling GenAI, map your LLM usage and risk zones

In this Help Net Security interview, Paolo del Mundo, Director of Application and Cloud Security at The Motley Fool, discusses how organizations can scale their AI usage by …

What CISOs need to know about agentic AI

GenAI has been the star of the show lately. Tools like ChatGPT impressed everyone with how well they can summarize, write, and respond. But something new is gaining ground: …

86% of all LLM usage is driven by ChatGPT

ChatGPT remains the most widely used LLM among New Relic customers, making up over 86% of all tokens processed. Developers and enterprises are shifting to OpenAI’s latest …

Securing agentic AI systems before they go rogue

In this Help Net Security video, Eoin Wickens, Director of Threat Intelligence at HiddenLayer, explores the security risks posed by agentic AI. He breaks down how agentic AI …

The hidden risks of LLM autonomy

Large language models (LLMs) have come a long way from the once passive, simple chatbots that could only respond to basic user prompts or search the internet to generate …

Why security teams cannot rely solely on AI guardrails

In this Help Net Security interview, Dr. Peter Garraghan, CEO of Mindgard, discusses their research around vulnerabilities in the guardrails used to protect large AI models. …
