
LLM vulnerability patching skills remain limited

Security teams are wondering whether LLMs can help speed up patching. A new study tests that idea and shows where the tools hold up and where they fall short. The researchers …

LLMs are everywhere in your stack and every layer brings new risk

LLMs are moving deeper into enterprise products and workflows, and that shift is creating new pressure on security leaders. A new guide from DryRun Security outlines how these …

AI agents break rules in unexpected ways

AI agents are starting to take on tasks that used to be handled by people. These systems plan steps, call tools, and carry out actions without a person approving every move. …

NVIDIA research shows how agentic AI fails under attack

Enterprises are rushing to deploy agentic systems that plan, use tools, and make decisions with less human guidance than earlier AI models. This new class of systems also …

AI vs. you: Who’s better at permission decisions?

A single tap on a permission prompt can decide how far an app reaches into a user’s personal data. Most of these calls happen during installation. The number of prompts keeps …

Social data puts user passwords at risk in unexpected ways

Many CISOs already assume that social media creates new openings for password guessing, but new research helps show what that risk looks like in practice. The findings reveal …

Small language models step into the fight against phishing sites

Phishing sites keep rising, and security teams are searching for ways to sort suspicious pages at speed. A recent study explores whether small language models (SLMs) can scan …

DeepTeam: Open-source LLM red teaming framework

Security teams are pushing large language models into products faster than they can test them, which makes any new red teaming method worth paying attention to. DeepTeam is an …

BlueCodeAgent helps developers secure AI-generated code

AI-generated code brings security teams both power and risk at the same time. That tension is at the heart of the new tool called BlueCodeAgent, designed to …

The long conversations that reveal how scammers work

Online scammers often take weeks to build trust before making a move, which makes their work hard to study. A research team from UC San Diego built a system that does the …

Metis: Open-source, AI-driven tool for deep security code review

Metis is an open source tool that uses AI to help engineers run deep security reviews on code. Arm’s product security team built Metis to spot subtle flaws that are often …

Chinese cyber spies used Claude AI to automate 90% of their attack campaign, Anthropic claims

Anthropic threat researchers believe they have uncovered and disrupted the first documented case of a cyberattack executed with the help of the company’s agentic AI and minimal …
