Small language models step into the fight against phishing sites

Phishing sites keep rising, and security teams are searching for ways to sort suspicious pages at speed. A recent study explores whether small language models (SLMs) can scan …

DeepTeam: Open-source LLM red teaming framework

Security teams are pushing large language models into products faster than they can test them, which makes any new red teaming method worth paying attention to. DeepTeam is an …

BlueCodeAgent helps developers secure AI-generated code

When AI models generate code, they deliver both power and risk for security teams. That tension is at the heart of BlueCodeAgent, a new tool designed to …

The long conversations that reveal how scammers work

Online scammers often take weeks to build trust before making a move, which makes their work hard to study. A research team from UC San Diego built a system that does the …

Metis: Open-source, AI-driven tool for deep security code review

Metis is an open source tool that uses AI to help engineers run deep security reviews on code. Arm’s product security team built Metis to spot subtle flaws that are often …

Chinese cyber spies used Claude AI to automate 90% of their attack campaign, Anthropic claims

Anthropic threat researchers believe that they’ve uncovered and disrupted the first documented case of a cyberattack executed with the help of its agentic AI and minimal …

Autonomous AI could challenge how we define criminal behavior

Whether we ever build AI that thinks like a person is still uncertain. What seems more realistic is a future with more independent machines. These systems already work across …

OpenGuardrails: A new open-source model aims to make AI safer for real-world use

When you ask a large language model to summarize a policy or write code, you probably assume it will behave safely. But what happens when someone tries to trick it into …

Google uncovers malware using LLMs to operate and evade detection

PromptLock, the AI-powered proof-of-concept ransomware developed by researchers at NYU Tandon and initially mistaken for an active threat by ESET, is no longer an isolated …

PortGPT: How researchers taught an AI to backport security patches automatically

Keeping older software versions secure often means backporting patches from newer releases. It is a routine but tedious job, especially for large open-source projects such as …

Shadow AI: New ideas emerge to tackle an old problem in new form

Shadow AI is the second-most prevalent form of shadow IT in corporate environments, 1Password’s latest annual report has revealed. Based on a survey of over 5,000 …

AI chatbots are sliding toward a privacy crisis

AI chat tools are taking over offices, but at what cost to privacy? People often feel anonymous in chat interfaces and may share personal data without realizing the risks. …
