AI’s split personality: Solving crimes while helping conceal them
What happens when investigators and cybercriminals start using the same technology? AI is now doing both, helping law enforcement trace attacks while also being tested for its …
Most AI privacy research looks the wrong way
Most research on LLM privacy has focused on the wrong problem, according to a new paper by researchers from Carnegie Mellon University and Northeastern University. The authors …
When trusted AI connections turn hostile
Researchers have revealed a new security blind spot in how LLM applications connect to external systems. Their study shows that malicious Model Context Protocol (MCP) servers …
GPT needs to be rewired for security
LLMs and agentic systems already shine at everyday productivity, including transcribing and summarizing meetings, extracting action items, prioritizing critical emails, and …
A2AS framework targets prompt injection and agentic AI security risks
AI systems are now deeply embedded in business operations, and this introduces new security risks that traditional controls are not built to handle. The newly released A2AS …
Microsoft spots LLM-obfuscated phishing attack
Cybercriminals are increasingly using AI-powered tools and (malicious) large language models to create convincing, error-free emails, deepfakes, online personas, …
Building a stronger SOC through AI augmentation
In this Help Net Security interview, Tim Bramble, Director of Threat Detection and Response at OpenText, discusses how SOC teams are gaining value from AI in detecting and …
LLMs can boost cybersecurity decisions, but not for everyone
LLMs are moving fast from experimentation to daily use in cybersecurity. Teams are starting to use them to sort through threat intelligence, guide incident response, and help …
Google introduces VaultGemma, a differentially private LLM built for secure data handling
Google has released VaultGemma, a large language model designed to keep sensitive data private during training. The model uses differential privacy techniques to prevent …
Garak: Open-source LLM vulnerability scanner
LLMs can make mistakes, leak data, or be tricked into doing things they were not meant to do. Garak is a free, open-source tool designed to test these weaknesses. It checks …
BruteForceAI: Free AI-powered login brute force tool
BruteForceAI is a penetration testing tool that uses LLMs to improve the way brute-force attacks are carried out. Instead of relying on manual setup, the tool can analyze HTML …
AI isn’t taking over the world, but here’s what you should worry about
In this Help Net Security video, Josh Meier, Senior Generative AI Author at Pluralsight, debunks the myth that AI could “escape” servers or act on its own. He explains how …