AI is rewriting the rules of cyber defense
Enterprise security teams are underprepared to detect new, adaptive AI-powered threats, according to a study published by Lenovo. The survey of 600 IT leaders across major markets shows widespread concern about both external and internal risks, along with low confidence in current defenses.
External AI threats gain ground
More than six in ten IT leaders see cybercriminals using AI as a growing risk. AI-enhanced campaigns can adapt to defenses in real time, imitate normal user behavior, and operate across cloud, devices, and applications. Respondents admit that they are not confident they can defend against these techniques, which may include polymorphic malware, deepfake social engineering, and AI-powered brute-force attempts.
The report indicates that AI is accelerating the pace of attacks. Adversaries can generate code, adjust tactics, and exploit systems at machine speed, shortening the window for detection and response and creating demand for faster monitoring and analysis.
Internal AI risks
The survey also shows that AI inside organizations presents its own challenges. Seven in ten IT leaders believe misuse of public AI tools by employees is a serious concern, while more than six in ten say AI agents represent an insider threat they are not ready to handle. Fewer than 40 percent feel confident managing these risks.
Concerns extend to the protection of AI models, training data, and prompts. As businesses adopt AI solutions, tampering or data poisoning could undermine trust, cause reputational harm, or even expose sensitive information.
Despite this, leaders report feeling somewhat more confident about securing in-house AI development compared with defending against external AI-enabled attacks.
Gaps in defensive capabilities
Confidence in existing tools and processes is low. More than half of respondents believe their data protection measures are not sufficient to counter AI-related threats. Vulnerability analysis, incident detection and response, and identity management also fall short: between 60 and 70 percent of leaders doubt these areas are fully prepared for AI-era attacks.
The shortcomings reveal the limits of traditional approaches. Measures such as role-based data access or signature-based antivirus tools cannot keep up when AI systems scan large datasets or when malware mutates to evade detection. Security leaders recognize that AI-driven adversaries demand new defenses.
Barriers to progress
While many organizations are already experimenting with AI in cybersecurity, scaling these efforts is difficult. Three major barriers stand out in the survey: complex IT environments, lack of skilled staff, and limited budgets.
Most enterprises rely on a patchwork of legacy and newer systems, making it difficult to integrate AI-powered security tools. Progress is further hindered by a shortage of professionals with expertise in both AI and cybersecurity.
Finally, budget pressures leave some teams reluctant to replace existing tools, even when their features are outdated.
Paths forward
The report outlines practical steps for leaders. These include consolidating telemetry across endpoints, applications, and cloud environments to reduce blind spots and improve visibility. Establishing AI usage policies for employees is necessary, as is securing the AI development lifecycle against manipulation and data leakage.
Training staff to recognize AI-enabled social engineering, such as voice and video impersonation, is another priority. On the technology side, unifying monitoring and adopting AI-based analysis can help defenders keep pace with machine-speed threats.
“AI has changed the balance of power in cybersecurity. To keep up, organizations need intelligence that adapts as fast as the threats. That means fighting AI with AI,” said Rakshit Ghura, VP & GM, Lenovo Digital Workplace Solutions. “With intelligent, adaptive defenses, IT leaders can protect their people, assets, and data while unlocking AI’s potential to drive business forward.”