Free AI coding security rules now available on GitHub
Developers are turning to AI coding assistants to speed up their work. But these tools can also introduce security risks if they suggest flawed or unsafe code. …
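The flaws such rules target are often mundane. A hypothetical illustration (the function names and the `users` table are assumptions, not taken from the rule set itself): an assistant that builds SQL queries with string interpolation instead of parameters.

```python
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, username: str) -> list:
    # The kind of suggestion secure coding rules warn about: SQL built by
    # string interpolation, which is open to SQL injection.
    cur = conn.execute(f"SELECT id, email FROM users WHERE name = '{username}'")
    return cur.fetchall()


def find_user_safe(conn: sqlite3.Connection, username: str) -> list:
    # The safer equivalent: a parameterized query, which is what such rules
    # generally steer assistants toward.
    cur = conn.execute("SELECT id, email FROM users WHERE name = ?", (username,))
    return cur.fetchall()
```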
Before scaling GenAI, map your LLM usage and risk zones
In this Help Net Security interview, Paolo del Mundo, Director of Application and Cloud Security at The Motley Fool, discusses how organizations can scale their AI usage by …
What CISOs need to know about agentic AI
GenAI has been the star of the show lately. Tools like ChatGPT impressed everyone with how well they can summarize, write, and respond. But something new is gaining ground: …
86% of all LLM usage is driven by ChatGPT
ChatGPT remains the most widely used LLM among New Relic customers, making up over 86% of all tokens processed. Developers and enterprises are shifting to OpenAI’s latest …
Securing agentic AI systems before they go rogue
In this Help Net Security video, Eoin Wickens, Director of Threat Intelligence at HiddenLayer, explores the security risks posed by agentic AI. He breaks down how agentic AI …
The hidden risks of LLM autonomy
Large language models (LLMs) have come a long way from the once passive and simple chatbots that could respond to basic user prompts or search the internet to generate …
Why security teams cannot rely solely on AI guardrails
In this Help Net Security interview, Dr. Peter Garraghan, CEO of Mindgard, discusses their research into vulnerabilities in the guardrails used to protect large AI models. …
Review: AI Agents in Action
If you’re trying to make sense of how to actually build AI agents, not just talk about them, AI Agents in Action might be for you. About the author: Michael Lanham, Lead …
Even the best safeguards can’t stop LLMs from being fooled
In this Help Net Security interview, Michael Pound, Associate Professor at the University of Nottingham, shares his insights on the cybersecurity risks associated with LLMs. He …
SWE-agent: Open-source tool uses LLMs to fix issues in GitHub repositories
By connecting powerful language models like GPT-4o and Claude 3.5 Sonnet to real-world tools, the open-source tool SWE-agent allows them to autonomously perform complex tasks: …
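SWE-agent’s actual interface is richer than this, but tools in this class share the same underlying pattern: an agent loop in which the model proposes a command, the harness executes it in the repository checkout, and the output becomes the next observation. A minimal sketch of that generic loop (not SWE-agent’s real API), with a hypothetical `call_llm` stub standing in for the model:

```python
import subprocess


def call_llm(transcript: list[str]) -> str:
    """Hypothetical stub: in a real agent this would call a model such as
    GPT-4o or Claude 3.5 Sonnet and return the next shell command, or
    'DONE' once the model believes the issue is fixed."""
    raise NotImplementedError


def agent_loop(issue_description: str, max_steps: int = 20) -> None:
    # Transcript of everything the model has seen so far.
    transcript = [f"Issue to fix:\n{issue_description}"]
    for _ in range(max_steps):
        command = call_llm(transcript)
        if command.strip() == "DONE":
            break
        # Run the proposed command in the working copy and feed
        # stdout/stderr back to the model as the next observation.
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=60
        )
        transcript.append(f"$ {command}\n{result.stdout}{result.stderr}")
```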
When AI agents go rogue, the fallout hits the enterprise
In this Help Net Security interview, Jason Lord, CTO at AutoRABIT, discusses the cybersecurity risks posed by AI agents integrated into real-world systems. Issues like …
Package hallucination: LLMs may deliver malicious code to careless devs
LLMs’ tendency to “hallucinate” code packages that don’t exist could become the basis for a new type of supply chain attack dubbed …
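One simple defensive habit follows from this, sketched below under the assumption that the suggested dependencies are Python packages: verify that each name an assistant proposes actually exists on PyPI (via its public JSON API) before running pip install. The package names in the example are made up for illustration.

```python
import urllib.error
import urllib.request


def exists_on_pypi(package_name: str) -> bool:
    """Return True if the package is registered on PyPI.

    A hallucinated name will typically return 404, but an attacker may later
    register it, so absence today is no guarantee, and presence alone says
    nothing about whether the package is trustworthy.
    """
    url = f"https://pypi.org/pypi/{package_name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise


# Example: check every dependency an assistant suggested before installing.
suggested = ["requests", "flask", "totally-made-up-llm-package"]
for name in suggested:
    print(name, "->", "found on PyPI" if exists_on_pypi(name) else "NOT FOUND")
```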