Peter Garraghan
Why security teams cannot rely solely on AI guardrails

In this Help Net Security interview, Dr. Peter Garraghan, CEO of Mindgard, discusses the company's research into vulnerabilities in the guardrails used to protect large AI models. …

AI Agents in Action
Review: AI Agents in Action

If you’re trying to make sense of how to actually build AI agents, not just talk about them, AI Agents in Action might be for you. About the author: Michael Lanham, Lead …

Michael Pound
Even the best safeguards can’t stop LLMs from being fooled

In this Help Net Security interview, Michael Pound, Associate Professor at the University of Nottingham, shares his insights on the cybersecurity risks associated with LLMs. He …

SWE-agent
SWE-agent: Open-source tool uses LLMs to fix issues in GitHub repositories

By connecting powerful language models like GPT-4o and Claude 3.5 Sonnet to real-world tools, the open-source tool SWE-agent allows them to autonomously perform complex tasks: …

Jason Lord
When AI agents go rogue, the fallout hits the enterprise

In this Help Net Security interview, Jason Lord, CTO at AutoRABIT, discusses the cybersecurity risks posed by AI agents integrated into real-world systems. Issues like …

malicious package
Package hallucination: LLMs may deliver malicious code to careless devs

LLMs’ tendency to “hallucinate” code packages that don’t exist could become the basis for a new type of supply chain attack dubbed …
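The attack works because developers install LLM-suggested dependencies without checking that the names were ever vetted. A minimal sketch of the kind of dependency vetting this motivates, assuming a locally maintained allowlist (the package names below are purely illustrative):

```python
def vet_packages(requested, allowlist):
    """Return requested package names that are not on the allowlist.

    A hallucinated package name suggested by an LLM would surface here,
    because no reviewer has ever added it to the vetted list.
    Comparison is case-insensitive, matching how package indexes
    typically normalize names.
    """
    allowed = {name.lower() for name in allowlist}
    return [name for name in requested if name.lower() not in allowed]


# Illustrative only: "reqeusts" stands in for a hallucinated or
# typosquatted package an attacker could pre-register on an index.
suspicious = vet_packages(["requests", "reqeusts"], ["requests", "numpy"])
print(suspicious)  # prints ['reqeusts']
```

An allowlist is a blunt instrument, but it turns a silent `pip install` of a hallucinated name into an explicit review step.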

AI threats
The quiet data breach hiding in AI workflows

As AI becomes embedded in daily business workflows, the risk of data exposure increases. Prompt leaks are not rare exceptions. They are a natural outcome of how employees use …

large language models
Excessive agency in LLMs: The growing risk of unchecked autonomy

For an AI agent to “think” and act autonomously, it must be granted agency; that is, it must be allowed to integrate with other systems, read and analyze data, and have …

idea
The rise of compromised LLM attacks

In this Help Net Security video, Sohrob Kazerounian, Distinguished AI Researcher at Vectra AI, discusses how the ongoing rapid adoption of LLM-based applications has already …

AI
Two things you need in place to successfully adopt AI

Organizations should not shy away from taking advantage of AI tools, but they need to find the right balance between maximizing efficiency and mitigating organizational risk. …

Aaron Roberts
Man vs. machine: Striking the perfect balance in threat intelligence

In this Help Net Security interview, Aaron Roberts, Director at Perspective Intelligence, discusses how automation is reshaping threat intelligence. He explains that while AI …

DeepSeek
DeepSeek’s popularity exploited by malware peddlers, scammers

As US-based AI companies struggle with the news that the recently released Chinese-made open source DeepSeek-R1 reasoning model performs as well as theirs for a fraction of …
