
AI’s split personality: Solving crimes while helping conceal them

What happens when investigators and cybercriminals start using the same technology? AI is now doing both, helping law enforcement trace attacks while also being tested for its …

Most AI privacy research looks the wrong way

Most research on LLM privacy has focused on the wrong problem, according to a new paper by researchers from Carnegie Mellon University and Northeastern University. The authors …

When trusted AI connections turn hostile

Researchers have revealed a new security blind spot in how LLM applications connect to external systems. Their study shows that malicious Model Context Protocol (MCP) servers …

GPT needs to be rewired for security

LLMs and agentic systems already shine at everyday productivity, including transcribing and summarizing meetings, extracting action items, prioritizing critical emails, and …

A2AS framework targets prompt injection and agentic AI security risks

AI systems are now deeply embedded in business operations, and this introduces new security risks that traditional controls are not built to handle. The newly released A2AS …

Microsoft spots LLM-obfuscated phishing attack

Cybercriminals are increasingly using AI-powered tools and (malicious) large language models to create convincing, error-free emails, deepfakes, online personas, …

Building a stronger SOC through AI augmentation

In this Help Net Security interview, Tim Bramble, Director of Threat Detection and Response at OpenText, discusses how SOC teams are gaining value from AI in detecting and …

LLMs can boost cybersecurity decisions, but not for everyone

LLMs are moving fast from experimentation to daily use in cybersecurity. Teams are starting to use them to sort through threat intelligence, guide incident response, and help …

Google introduces VaultGemma, a differentially private LLM built for secure data handling

Google has released VaultGemma, a large language model designed to keep sensitive data private during training. The model uses differential privacy techniques to prevent …

Garak: Open-source LLM vulnerability scanner

LLMs can make mistakes, leak data, or be tricked into doing things they were not meant to do. Garak is a free, open-source tool designed to test these weaknesses. It checks …

BruteForceAI: Free AI-powered login brute force tool

BruteForceAI is a penetration testing tool that uses LLMs to improve the way brute-force attacks are carried out. Instead of relying on manual setup, the tool can analyze HTML …

AI isn’t taking over the world, but here’s what you should worry about

In this Help Net Security video, Josh Meier, Senior Generative AI Author at Pluralsight, debunks the myth that AI could “escape” servers or act on its own. He explains how …
