LLMs change their answers based on who’s asking
AI chatbots may deliver unequal answers depending on who is asking the question. A new study from the MIT Center for Constructive Communication finds that LLMs provide less …
Waiting for AI superintelligence? Don’t hold your breath
AI’s impact on systems, security, and decision-making is already permanent. Superintelligence, often referred to as artificial superintelligence (ASI), describes a …
Unbounded AI use can break your systems
In this Help Net Security video, James Wickett, CEO of DryRun Security, explains cyber risks many teams underestimate as they add AI to products. He focuses on how fast LLM …
EU’s Chat Control could put government monitoring inside robots
Cybersecurity debates around surveillance usually stay inside screens. A new academic study argues that this boundary no longer holds when communication laws extend into …
Turning plain language into firewall rules
Firewall rules often begin as a sentence in someone’s head. A team needs access to an application. A service needs to be blocked after hours. Translating those ideas into …
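The translation step the article describes can be sketched in miniature: a structured intent (the kind of object an LLM might extract from a sentence like "the team needs access to the app on port 8443") is rendered into a firewall rule. The intent schema and the `render_rule` helper below are hypothetical illustrations, not tooling from the article.

```python
def render_rule(intent: dict) -> str:
    """Render a structured, LLM-extracted intent into one iptables rule.

    The intent schema here is an assumption for illustration:
    {"action": "allow"|"block", "source": CIDR, "port": int}
    """
    action = {"allow": "ACCEPT", "block": "DROP"}[intent["action"]]
    parts = ["iptables", "-A", "FORWARD"]
    if "source" in intent:
        parts += ["-s", intent["source"]]
    if "port" in intent:
        parts += ["-p", "tcp", "--dport", str(intent["port"])]
    parts += ["-j", action]
    return " ".join(parts)

# "The finance team needs access to the reporting app on port 8443."
intent = {"action": "allow", "source": "10.20.0.0/24", "port": 8443}
print(render_rule(intent))
# iptables -A FORWARD -s 10.20.0.0/24 -p tcp --dport 8443 -j ACCEPT
```

Keeping the natural-language-to-structure step separate from the structure-to-rule step means the generated rule can be validated or reviewed before it ever touches a firewall.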
AI security risks are also cultural and developmental
Security teams spend much of their time tracking vulnerabilities, abuse patterns, and system failures. A new study argues that many AI risks sit deeper than technical flaws. …
LLMs are automating the human part of romance scams
Romance scams succeed because they feel human. New research shows that feeling no longer requires a person on the other side of the chat. The three stages of a romance-baiting …
LLMs can assist with vulnerability scoring, but context still matters
Every new vulnerability disclosure adds another decision point for already stretched security teams. A recent study explores whether LLMs can take on part of that burden by …
Governance maturity defines enterprise AI confidence
AI security has reached a point where enthusiasm alone no longer carries organizations forward. New Cloud Security Alliance research shows that governance has become the main …
DIG AI: Uncensored darknet AI assistant at the service of criminals and terrorists
Resecurity has identified the emergence of uncensored darknet AI assistants, enabling threat actors to leverage advanced data processing capabilities for malicious purposes. …
Browser agents don’t always respect your privacy choices
Browser agents promise to handle online tasks without constant user input. They can shop, book reservations, and manage accounts by driving a web browser through an AI model. …
AI isn’t one system, and your threat model shouldn’t be either
In this Help Net Security interview, Naor Penso, CISO at Cerebras Systems, explains how to threat model modern AI stacks without treating them as a single risk. He discusses …