AI can write your code, but nearly half of it may be insecure

While GenAI excels at producing functional code, it introduces security vulnerabilities in 45 percent of cases, according to Veracode’s 2025 GenAI Code Security Report, which …

What attackers know about your company thanks to AI

In this Help Net Security video, Tom Cross, Head of Threat Research at GetReal Security, explores how generative AI is empowering threat actors. He breaks down three key …

New AI model offers faster, greener way for vulnerability detection

A team of researchers has developed a new AI model, called White-Basilisk, that detects software vulnerabilities more efficiently than much larger systems. The model’s release …

Fighting AI with AI: How Darwinium is reshaping fraud defense

AI agents are showing up in more parts of the customer journey, from product discovery to checkout. And fraudsters are also putting them to work, often with alarming success. …

Vulnhuntr: Open-source tool to identify remotely exploitable vulnerabilities

Vulnhuntr is an open-source tool that finds remotely exploitable vulnerabilities. It uses LLMs and static code analysis to trace how data moves through an application, from …

Are your employees using Chinese GenAI tools at work?

Nearly one in 12 employees is using Chinese-developed generative AI tools at work, and they’re exposing sensitive data in the process. That’s according to new research …

Behind the code: How developers work in 2025

How are developers working in 2025? Docker surveyed over 4,500 people to find out, and the answers are a mix of progress and ongoing pain points. AI is gaining ground but …

Employees are quietly bringing AI to work and leaving security behind

While IT departments race to implement AI governance frameworks, many employees have already opened a backdoor for AI, according to ManageEngine. The rise of unauthorized AI …

You can’t trust AI chatbots not to serve you phishing pages, malicious downloads, or bad code

Popular AI chatbots powered by large language models (LLMs) often fail to provide accurate information, and researchers expect threat actors to ramp up their …

AI tools are everywhere, and most are off your radar

80% of AI tools used by employees go unmanaged by IT or security teams, according to Zluri’s The State of AI in the Workplace 2025 report. AI is popping up all over the …

How cybercriminals are weaponizing AI and what CISOs should do about it

In a recent case tracked by Flashpoint, a finance worker at a global firm joined a video call that seemed normal. By the end of it, $25 million was gone. Everyone on the call …

We know GenAI is risky, so why aren’t we fixing its flaws?

Even though GenAI threats are a top concern for both security teams and leadership, the current level of testing and remediation for LLM and AI-powered applications isn’t …
