Small language models step into the fight against phishing sites
Phishing sites keep multiplying, and security teams are searching for ways to triage suspicious pages at speed. A recent study explores whether small language models (SLMs) can scan …
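In rough terms, using an SLM for this kind of triage might look like the sketch below: condense page features into a short classification prompt and ask the model for a verdict. This is a minimal illustration, not the study's actual pipeline; `query_slm` is a hypothetical stand-in for a locally hosted small model.

```python
# Hedged sketch of SLM-based phishing triage (assumed approach, not the
# study's method). A real deployment would replace query_slm with a call
# to an actual small language model.

def build_prompt(url: str, title: str, has_login_form: bool) -> str:
    """Condense a few page features into a short classification prompt."""
    return (
        "Classify the following web page as PHISHING or BENIGN.\n"
        f"URL: {url}\n"
        f"Title: {title}\n"
        f"Login form present: {has_login_form}\n"
        "Answer with one word."
    )

def query_slm(prompt: str) -> str:
    # Placeholder model: flags pages that present a login form.
    # A real SLM would weigh many more signals from the prompt.
    return "PHISHING" if "Login form present: True" in prompt else "BENIGN"

verdict = query_slm(build_prompt("http://secure-paypa1.example/login", "Sign in", True))
print(verdict)
```

The appeal of SLMs here is cost: a model small enough to run locally can screen pages at a volume where a large hosted model would be impractical.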
DeepTeam: Open-source LLM red teaming framework
Security teams are pushing large language models into products faster than they can test them, which makes any new red teaming method worth paying attention to. DeepTeam is an …
BlueCodeAgent helps developers secure AI-generated code
When AI models generate code, they deliver both power and risk for security teams. That tension is at the heart of BlueCodeAgent, a new tool designed to …
The long conversations that reveal how scammers work
Online scammers often take weeks to build trust before making a move, which makes their work hard to study. A research team from UC San Diego built a system that does the …
Metis: Open-source, AI-driven tool for deep security code review
Metis is an open-source tool that uses AI to help engineers run deep security reviews on code. Arm’s product security team built Metis to spot subtle flaws that are often …
Chinese cyber spies used Claude AI to automate 90% of their attack campaign, Anthropic claims
Anthropic threat researchers believe that they’ve uncovered and disrupted the first documented case of a cyberattack executed with the help of its agentic AI and minimal …
Autonomous AI could challenge how we define criminal behavior
Whether we will ever build AI that thinks like a person is still uncertain. A future filled with increasingly independent machines seems more realistic. These systems already work across …
OpenGuardrails: A new open-source model aims to make AI safer for real-world use
When you ask a large language model to summarize a policy or write code, you probably assume it will behave safely. But what happens when someone tries to trick it into …
Google uncovers malware using LLMs to operate and evade detection
PromptLock, the AI-powered proof-of-concept ransomware developed by researchers at NYU Tandon and initially mistaken for an active threat by ESET, is no longer an isolated …
PortGPT: How researchers taught an AI to backport security patches automatically
Keeping older software versions secure often means backporting patches from newer releases. It is a routine but tedious job, especially for large open-source projects such as …
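Why backporting resists simple automation can be shown with a toy example: a patch hunk written against a newer release often fails to apply to an older tree because the surrounding code has drifted. The sketch below is an illustration of that failure mode only; it does not depict PortGPT's actual pipeline, and all names in it are hypothetical.

```python
# Hedged sketch: why backporting is tedious. A hunk carries context from the
# release it was written against; if the older tree lacks that context, the
# hunk cannot be applied mechanically and needs manual (or AI-assisted) work.

NEW_RELEASE = ["parse_header(buf)", "check_len(buf, n)", "copy(buf, n)"]
OLD_RELEASE = ["parse_header(buf)", "copy(buf, n)"]  # predates the refactor

# The security fix, expressed as (context line, line to insert after it).
HUNK = {"context": "check_len(buf, n)", "insert": "if n > MAX: return -1"}

def apply_hunk(lines, hunk):
    """Insert hunk['insert'] after the context line, or signal a conflict."""
    if hunk["context"] not in lines:
        return None  # context missing in the old tree: manual backport needed
    i = lines.index(hunk["context"])
    return lines[: i + 1] + [hunk["insert"]] + lines[i + 1 :]

print(apply_hunk(NEW_RELEASE, HUNK))  # applies cleanly
print(apply_hunk(OLD_RELEASE, HUNK))  # None: the hunk's context is absent
```

Resolving that `None` case, where the fix must be re-expressed against older code, is the step a tool in this space would aim to automate.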
Shadow AI: New ideas emerge to tackle an old problem in new form
Shadow AI is the second-most prevalent form of shadow IT in corporate environments, 1Password’s latest annual report has revealed. Based on a survey of over 5,000 …
AI chatbots are sliding toward a privacy crisis
AI chat tools are taking over offices, but at what cost to privacy? People often feel anonymous in chat interfaces and may share personal data without realizing the risks. …