Privacy risks sit inside the ads that fill your social media feed
Regulatory limits on explicit targeting have not stopped algorithmic profiling on the web. Ad optimization systems still adapt which ads appear based on users’ private …
AI might be the answer to better phishing resilience
Phishing is still a go-to tactic for attackers, which is why even small gains in user training are worth noticing. A recent research project from the University of Bari looked …
LLM privacy policies keep getting longer, denser, and nearly impossible to decode
People expect privacy policies to explain what happens to their data. What users get instead is a growing wall of text that feels harder to read each year. In a new study, …
LLM vulnerability patching skills remain limited
Security teams are wondering whether LLMs can help speed up patching. A new study tests that idea and shows where the tools hold up and where they fall short. The researchers …
LLMs are everywhere in your stack, and every layer brings new risk
LLMs are moving deeper into enterprise products and workflows, and that shift is creating new pressure on security leaders. A new guide from DryRun Security outlines how these …
AI agents break rules in unexpected ways
AI agents are starting to take on tasks that used to be handled by people. These systems plan steps, call tools, and carry out actions without a person approving every move. …
NVIDIA research shows how agentic AI fails under attack
Enterprises are rushing to deploy agentic systems that plan, use tools, and make decisions with less human guidance than earlier AI models. This new class of systems also …
AI vs. you: Who’s better at permission decisions?
A single tap on a permission prompt can decide how far an app reaches into a user’s personal data. Most of these calls happen during installation. The number of prompts keeps …
Social data puts user passwords at risk in unexpected ways
Many CISOs already assume that social media creates new openings for password guessing, but new research helps show what that risk looks like in practice. The findings reveal …
Small language models step into the fight against phishing sites
Phishing sites keep rising, and security teams are searching for ways to sort suspicious pages at speed. A recent study explores whether small language models (SLMs) can scan …
DeepTeam: Open-source LLM red teaming framework
Security teams are pushing large language models into products faster than they can test them, which makes any new red teaming method worth paying attention to. DeepTeam is an …
BlueCodeAgent helps developers secure AI-generated code
When AI models generate code, they deliver both power and risk for security teams. That tension is at the heart of a new tool called BlueCodeAgent, designed to …
Don't miss
- Crypto theft in 2025: North Korean hackers continue to dominate
- Clipping Scripted Sparrow’s wings: Tracking a global phishing ring
- Microsoft 365 users targeted in device code phishing attacks
- More than half of public vulnerabilities bypass leading WAFs
- The soft underbelly of space isn’t in orbit, it’s on the ground