LLM vulnerability patching skills remain limited
Security teams are wondering whether LLMs can help speed up patching. A new study tests that idea and shows where the tools hold up and where they fall short. The researchers …
LLMs are everywhere in your stack and every layer brings new risk
LLMs are moving deeper into enterprise products and workflows, and that shift is creating new pressure on security leaders. A new guide from DryRun Security outlines how these …
AI agents break rules in unexpected ways
AI agents are starting to take on tasks that used to be handled by people. These systems plan steps, call tools, and carry out actions without a person approving every move. …
NVIDIA research shows how agentic AI fails under attack
Enterprises are rushing to deploy agentic systems that plan, use tools, and make decisions with less human guidance than earlier AI models. This new class of systems also …
AI vs. you: Who’s better at permission decisions?
A single tap on a permission prompt can decide how far an app reaches into a user’s personal data. Most of these calls happen during installation. The number of prompts keeps …
Social data puts user passwords at risk in unexpected ways
Many CISOs already assume that social media creates new openings for password guessing, but new research helps show what that risk looks like in practice. The findings reveal …
Small language models step into the fight against phishing sites
Phishing sites keep rising, and security teams are searching for ways to sort suspicious pages at speed. A recent study explores whether small language models (SLMs) can scan …
DeepTeam: Open-source LLM red teaming framework
Security teams are pushing large language models into products faster than they can test them, which makes any new red teaming method worth paying attention to. DeepTeam is an …
BlueCodeAgent helps developers secure AI-generated code
When AI models generate code, they deliver power and risk at the same time for security teams. That tension is at the heart of the new tool called BlueCodeAgent, designed to …
The long conversations that reveal how scammers work
Online scammers often take weeks to build trust before making a move, which makes their work hard to study. A research team from UC San Diego built a system that does the …
Metis: Open-source, AI-driven tool for deep security code review
Metis is an open source tool that uses AI to help engineers run deep security reviews on code. Arm’s product security team built Metis to spot subtle flaws that are often …
Chinese cyber spies used Claude AI to automate 90% of their attack campaign, Anthropic claims
Anthropic threat researchers believe they’ve uncovered and disrupted the first documented case of a cyberattack executed with the help of its agentic AI and minimal …
Don't miss
- 40 open-source tools redefining how security teams secure the stack
- Password habits are changing, and the data shows how far we’ve come
- Product showcase: Tuta – secure, encrypted, private email
- Henkel CISO on the messy truth of monitoring factories built across decades
- The hidden dynamics shaping who produces influential cybersecurity research