DIG AI: Uncensored darknet AI assistant at the service of criminals and terrorists

Resecurity has identified the emergence of uncensored darknet AI assistants, enabling threat actors to leverage advanced data processing capabilities for malicious purposes. …

Browser agents don’t always respect your privacy choices

Browser agents promise to handle online tasks without constant user input. They can shop, book reservations, and manage accounts by driving a web browser through an AI model. …

AI isn’t one system, and your threat model shouldn’t be either

In this Help Net Security interview, Naor Penso, CISO at Cerebras Systems, explains how to threat model modern AI stacks without treating them as a single risk. He discusses …

LLMs work better together in smart contract audits

Smart contract bugs continue to drain real money from blockchain systems, even after years of tooling and research. A new academic study suggests that large language models …

Privacy risks sit inside the ads that fill your social media feed

Regulatory limits on explicit targeting have not stopped algorithmic profiling on the web. Ad optimization systems still adapt which ads appear based on users’ private …

AI might be the answer for better phishing resilience

Phishing is still a go-to tactic for attackers, which is why even small gains in user training are worth noticing. A recent research project from the University of Bari looked …

LLM privacy policies keep getting longer, denser, and nearly impossible to decode

People expect privacy policies to explain what happens to their data. What users get instead is a growing wall of text that feels harder to read each year. In a new study, …

LLM vulnerability patching skills remain limited

Security teams are wondering whether LLMs can help speed up patching. A new study tests that idea and shows where the tools hold up and where they fall short. The researchers …

LLMs are everywhere in your stack and every layer brings new risk

LLMs are moving deeper into enterprise products and workflows, and that shift is creating new pressure on security leaders. A new guide from DryRun Security outlines how these …

AI agents break rules in unexpected ways

AI agents are starting to take on tasks that used to be handled by people. These systems plan steps, call tools, and carry out actions without a person approving every move. …

NVIDIA research shows how agentic AI fails under attack

Enterprises are rushing to deploy agentic systems that plan, use tools, and make decisions with less human guidance than earlier AI models. This new class of systems also …

AI vs. you: Who’s better at permission decisions?

A single tap on a permission prompt can decide how far an app reaches into a user’s personal data. Most of these calls happen during installation. The number of prompts keeps …
