Companies want the benefits of AI without the cyber blowback
51% of European IT and cybersecurity professionals said they expect AI-driven cyber threats and deepfakes to keep them up at night in 2026, according to ISACA. AI takes centre …
AI’s split personality: Solving crimes while helping conceal them
What happens when investigators and cybercriminals start using the same technology? AI is now doing both, helping law enforcement trace attacks while also being tested for its …
Most AI privacy research looks the wrong way
Most research on LLM privacy has focused on the wrong problem, according to a new paper by researchers from Carnegie Mellon University and Northeastern University. The authors …
When trusted AI connections turn hostile
Researchers have revealed a new security blind spot in how LLM applications connect to external systems. Their study shows that malicious Model Context Protocol (MCP) servers …
GPT needs to be rewired for security
LLMs and agentic systems already shine at everyday productivity, including transcribing and summarizing meetings, extracting action items, prioritizing critical emails, and …
A2AS framework targets prompt injection and agentic AI security risks
AI systems are now deeply embedded in business operations, and this introduces new security risks that traditional controls are not built to handle. The newly released A2AS …
Microsoft spots LLM-obfuscated phishing attack
Cybercriminals are increasingly using AI-powered tools and (malicious) large language models to create convincing, error-free emails, deepfakes, online personas, …
Building a stronger SOC through AI augmentation
In this Help Net Security interview, Tim Bramble, Director of Threat Detection and Response at OpenText, discusses how SOC teams are gaining value from AI in detecting and …
LLMs can boost cybersecurity decisions, but not for everyone
LLMs are moving fast from experimentation to daily use in cybersecurity. Teams are starting to use them to sort through threat intelligence, guide incident response, and help …
Google introduces VaultGemma, a differentially private LLM built for secure data handling
Google has released VaultGemma, a large language model designed to keep sensitive data private during training. The model uses differential privacy techniques to prevent …
Garak: Open-source LLM vulnerability scanner
LLMs can make mistakes, leak data, or be tricked into doing things they were not meant to do. Garak is a free, open-source tool designed to test these weaknesses. It checks …
BruteForceAI: Free AI-powered login brute force tool
BruteForceAI is a penetration testing tool that uses LLMs to improve the way brute-force attacks are carried out. Instead of relying on manual setup, the tool can analyze HTML …
Featured news
Resources
Don't miss
- AI isn’t one system, and your threat model shouldn’t be either
- LLMs work better together in smart contract audits
- Product showcase: NAKIVO v11.1 advances MSP service delivery with secure multi-tenant management
- Crypto theft in 2025: North Korean hackers continue to dominate
- Clipping Scripted Sparrow’s wings: Tracking a global phishing ring