Before scaling GenAI, map your LLM usage and risk zones
In this Help Net Security interview, Paolo del Mundo, Director of Application and Cloud Security at The Motley Fool, discusses how organizations can scale their AI usage by …
What CISOs need to know about agentic AI
GenAI has been the star of the show lately. Tools like ChatGPT have impressed everyone with how well they can summarize, write, and respond. But something new is gaining ground: …
86% of all LLM usage is driven by ChatGPT
ChatGPT remains the most widely used LLM among New Relic customers, making up over 86% of all tokens processed. Developers and enterprises are shifting to OpenAI’s latest …
Securing agentic AI systems before they go rogue
In this Help Net Security video, Eoin Wickens, Director of Threat Intelligence at HiddenLayer, explores the security risks posed by agentic AI. He breaks down how agentic AI …
The hidden risks of LLM autonomy
Large language models (LLMs) have come a long way from the once passive and simple chatbots that could respond to basic user prompts or search the internet to generate …
Why security teams cannot rely solely on AI guardrails
In this Help Net Security interview, Dr. Peter Garraghan, CEO of Mindgard, discusses the company’s research into vulnerabilities in the guardrails used to protect large AI models. …
Review: AI Agents in Action
If you’re trying to make sense of how to actually build AI agents, not just talk about them, AI Agents in Action might be for you. About the author: Michael Lanham, Lead …
Even the best safeguards can’t stop LLMs from being fooled
In this Help Net Security interview, Michael Pound, Associate Professor at the University of Nottingham, shares his insights on the cybersecurity risks associated with LLMs. He …
SWE-agent: Open-source tool uses LLMs to fix issues in GitHub repositories
By connecting powerful language models like GPT-4o and Claude 3.5 Sonnet to real-world tools, the open-source tool SWE-agent allows them to autonomously perform complex tasks: …
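SWE-agent's own interfaces aren't reproduced in this digest, but the general pattern it illustrates, an LLM repeatedly choosing from a small set of repository tools until it decides the issue is resolved, can be sketched briefly. Every name below (run_agent, TOOLS, the tuple protocol assumed for the llm callable) is a hypothetical illustration of that agent loop, not SWE-agent's actual API.

```python
# Minimal sketch of the agent pattern behind tools like SWE-agent:
# an LLM picks one of a few repository tools per step, observes the
# result, and repeats until it declares the issue fixed. All names
# here are hypothetical illustrations, not SWE-agent's real API.
import subprocess

def read_file(path: str) -> str:
    """Tool: return the contents of a repository file."""
    with open(path, encoding="utf-8") as f:
        return f.read()

def run_tests(cmd: str = "pytest -q") -> str:
    """Tool: run the test suite and return its combined output."""
    result = subprocess.run(cmd.split(), capture_output=True, text=True)
    return result.stdout + result.stderr

TOOLS = {"read_file": read_file, "run_tests": run_tests}

def run_agent(llm, issue: str, max_steps: int = 10) -> str:
    """Drive the loop. `llm` is assumed to map a transcript to either
    ("call", tool_name, argument) or ("done", final_summary)."""
    transcript = [f"Issue to fix: {issue}"]
    for _ in range(max_steps):
        action = llm(transcript)
        if action[0] == "done":
            return action[1]
        _, name, arg = action
        observation = TOOLS[name](arg)          # execute the chosen tool
        transcript.append(f"{name}({arg!r}) -> {observation[:2000]}")
    return "Step budget exhausted without a fix."
```

The step budget and the narrow tool registry are the security-relevant parts: they bound what an autonomous run can touch, which is exactly the concern the agentic-AI pieces above keep returning to.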
When AI agents go rogue, the fallout hits the enterprise
In this Help Net Security interview, Jason Lord, CTO at AutoRABIT, discusses the cybersecurity risks posed by AI agents integrated into real-world systems. Issues like …
Package hallucination: LLMs may deliver malicious code to careless devs
LLMs’ tendency to “hallucinate” code packages that don’t exist could become the basis for a new type of supply chain attack dubbed …
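A cheap, concrete defense against this class of attack is to confirm that a suggested package actually exists in the official registry before installing it. The sketch below queries PyPI's public JSON API (https://pypi.org/pypi/&lt;name&gt;/json); the helper name and the fail-closed behavior are assumptions for illustration.

```python
# Minimal sketch: before `pip install`-ing a package an LLM suggested,
# confirm it actually exists on PyPI. A 404 from the JSON API is a
# strong hint the name was hallucinated (or is a squatting target).
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if PyPI knows this package name."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as e:
        if e.code == 404:
            return False
        raise  # other HTTP errors: don't guess, surface them

for pkg in ["requests", "definitely-not-a-real-pkg-xyz"]:
    print(pkg, "->", package_exists_on_pypi(pkg))
```

Existence alone isn't a green light: attackers can pre-register commonly hallucinated names, so a hit should still be vetted against the expected maintainer, release history, and download counts.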
The quiet data breach hiding in AI workflows
As AI becomes embedded in daily business workflows, the risk of data exposure increases. Prompt leaks are not rare exceptions. They are a natural outcome of how employees use …
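A common first-line control for this risk is to scrub obvious secrets and identifiers from prompts before they leave the organization. The patterns and the redact_prompt helper below are illustrative assumptions, a minimal sketch rather than a complete DLP pipeline.

```python
# Minimal sketch of a pre-send prompt filter: redact obvious secrets
# and personal identifiers before a prompt reaches an external LLM.
# Patterns and the helper name are illustrative assumptions; real
# deployments typically use far richer detection behind a proxy.
import re

PATTERNS = {
    "EMAIL":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|pk|AKIA)[A-Za-z0-9_-]{16,}\b"),
    "SSN":     re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(text: str) -> str:
    """Replace each match with a typed placeholder, e.g. [REDACTED:EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

prompt = "Summarize: contact jane.doe@example.com, key sk-abc123def456ghi789"
print(redact_prompt(prompt))
```

In practice a filter like this would sit in an egress proxy in front of the LLM provider, so it applies uniformly to every tool employees use rather than relying on each application to remember it.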
Don't miss
- Your dependencies are 278 days out of date and your pipelines aren’t protected
- Security debt is becoming a governance issue for CISOs
- BlacksmithAI: Open-source AI-powered penetration testing framework
- When cyber threats start thinking for themselves
- IronCurtain: An open-source safeguard layer for autonomous AI assistants