Health insurance lead sites sell personal data within seconds of form submission
Lead generation websites that offer health insurance quotes collect sensitive personal data and sell it to multiple buyers within seconds of a user clicking submit. A study by …
6G network design puts AI at the center of spectrum, routing, and fault management
Wireless network operators are preparing for a generation of infrastructure where AI is built into the architecture from the start. Sixth-generation networks, expected to …
Google study finds LLMs are embedded at every stage of abuse detection
Online platforms are running large language models at every stage of abuse detection and content moderation, from generating training data to auditing their own systems for bias. Researchers …
Which messaging app takes the most limited approach to permissions on Android?
Messaging apps handle sensitive conversations, contacts, and media, and their behavior on a device varies in ways that affect privacy. An analysis of Android versions of …
Tracking drones with the 5G tower down the street
Drone detection in cities is expensive. Dedicated radar installations are cost-prohibitive at scale, cameras have limited range and stop working well at night, and LiDAR …
Malware detectors trained on one dataset often stumble on another
Machine learning models built to catch malware on Windows systems are typically evaluated on data that closely resembles their training set. In practice, the malware arriving …
Breaking out: Can AI agents escape their sandboxes?
Container sandboxes are part of routine AI agent testing and deployment. Agents use them to run code, edit files, and interact with system resources without direct access to …
Don’t count on government guidance after a smart home breach
People are filling their homes with internet-connected cameras, speakers, locks, and routers. When one of those devices is compromised, the next steps are often unclear. …
A nearly undetectable LLM attack needs only a handful of poisoned samples
Prompt engineering has become a standard part of how large language models are deployed in production, and it introduces an attack surface most organizations have not yet …
Google’s TurboQuant cuts AI memory use without losing accuracy
Large language models carry a persistent scaling problem. As context windows grow, the memory required to store key-value (KV) caches expands proportionally, consuming GPU …
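The proportional growth mentioned here follows directly from the KV cache layout: every generated token stores a key and a value vector per layer per attention head. A minimal sketch of that arithmetic, using assumed dimensions for a generic 7B-class model (32 layers, 32 KV heads, head dimension 128), shows why quantizing the cache matters:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem, batch=1):
    """Approximate KV cache size: 2x covers the separate key and value
    tensors cached for every layer, head, and token position."""
    return int(2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem * batch)

# Assumed dimensions for illustration only, not TurboQuant's actual targets.
fp16 = kv_cache_bytes(32, 32, 128, 32768, 2)    # fp16: 2 bytes per element
int4 = kv_cache_bytes(32, 32, 128, 32768, 0.5)  # 4-bit: 0.5 bytes per element
print(f"fp16: {fp16 / 2**30:.0f} GiB, 4-bit: {int4 / 2**30:.0f} GiB")
# prints "fp16: 16 GiB, 4-bit: 4 GiB"
```

Because the size is linear in `seq_len`, doubling the context window doubles the cache, which is why lower-precision storage yields a directly proportional memory saving.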
EDR killers are now standard equipment in ransomware attacks
Ransomware attackers routinely deploy tools designed to disable endpoint detection and response software before launching encryptors. These tools, known as EDR killers, have …
Hidden instructions in README files can make AI agents leak data
Developers rely on AI coding agents to set up projects, install dependencies, and run commands by following instructions in repository README files, which provide setup …