prompt injection
Microsoft details AI prompt abuse techniques targeting AI assistants
Prompt abuse occurs when crafted inputs manipulate an AI system into producing unintended behavior, such as attempting to access sensitive information or overriding built-in …
DeepTeam: Open-source LLM red teaming framework
Security teams are pushing large language models into products faster than they can test them, which makes any new red teaming method worth paying attention to. DeepTeam is an …
How attackers use patience to push past AI guardrails
Most CISOs already assume that prompt injection is a known risk. What may come as a surprise is how quickly those risks grow once an attacker is allowed to stay in the …
Shadow AI: New ideas emerge to tackle an old problem in new form
Shadow AI is the second-most prevalent form of shadow IT in corporate environments, 1Password’s latest annual report has revealed. Based on a survey of over 5,000 …
AI agents can leak company data through simple web searches
When a company deploys an AI agent that can search the web and access internal documents, most teams assume the agent is simply working as intended. New research shows how …
Stealthy attack serves poisoned web pages only to AI agents
AI agents can be tricked into covertly performing malicious actions by websites that are hidden from regular users’ view, JFrog AI architect Shaked Zychlinski has found. …
Microsoft: “Hack” this LLM-powered service and get paid
Microsoft, in collaboration with the Institute of Science and Technology Australia and ETH Zurich, has announced the LLMail-Inject Challenge, a competition to test and improve …
Featured news
Resources
Don't miss
- Claude helps researcher dig up decade-old Apache ActiveMQ RCE vulnerability (CVE-2026-34197)
- Acrobat Reader zero-day exploited in the wild for many months
- AI agent intent is a starting point, not a security strategy
- Asqav: Open-source SDK for AI agent governance
- BlueHammer: Windows zero-day exploit leaked