AI threats leave SecOps teams burned out and exposed
Security teams are leaning hard into AI. A recent survey of 500 senior cybersecurity professionals at large U.S. companies found that 86% have ramped up their AI use in the past year, largely in an effort to keep pace with a surge in AI-powered attacks.
But even as AI tools help with tasks like threat detection and data analysis, the pressure on security teams is getting worse. Nearly 70% of respondents say AI and other emerging technologies are actually contributing to burnout.
The findings come from the 2025 edition of the Voice of SecOps report, based on responses from organizations in finance, technology, manufacturing, healthcare, government, and critical infrastructure.
AI helps, but adds pressure
The survey paints a mixed picture. On one hand, AI is making routine security operations easier. Three-quarters of organizations use GenAI tools in some part of their security workflow, with common use cases including data analysis, customer service, and threat detection.
These tools are saving time. On average, organizations report reclaiming 12 hours per week thanks to AI-powered automation. For large enterprises, the savings jump to 15 hours weekly.
But these benefits come with a cost. The pace of technological change is hard to keep up with, especially for teams that are already stretched thin. As a result, the same tools meant to reduce workload are also contributing to fatigue: nearly 7 in 10 professionals blame AI and related technologies for increasing stress and burnout on their teams.
AI arms race: Attackers are winning
The surge in AI usage isn’t limited to defenders. Attackers are also stepping up their game, using AI to automate and scale up their efforts. In the past year, 38% of organizations reported falling victim to an AI-powered cyberattack.
These attacks aren’t just theoretical. The report shows that AI-enabled breaches have led to stolen data, financial losses, and reputational damage. In critical infrastructure sectors such as energy, utilities, and transportation, the numbers are even worse. Half of the organizations in this group experienced AI-driven attacks.
“The imbalance in cybersecurity today clearly favors attackers,” Carl Froggett, CIO at Deep Instinct, told Help Net Security. “They operate with minimal risk of repercussions, even from law enforcement, and are leveraging unregulated AI and large language models stripped of built-in safeguards.”
That freedom allows attackers to quickly adopt and weaponize the latest tools, gaining a clear edge. Meanwhile, defenders are left trying to play catch-up with fewer resources. Many have already poured significant money into AI and ML systems, training, and integration, which makes rapid course correction difficult.
“SecOps teams are tasked with maintaining day-to-day defenses and rolling out new strategies, all while dealing with staffing shortages and resource constraints,” Froggett added. “In regulated industries, compliance requirements create another hurdle, often slowing down adoption of newer technologies.”
The result is a widening gap between what attackers can do and how fast defenders can respond.
Awareness lags in key sectors
The public sector and healthcare organizations reported fewer AI-driven incidents, but the report warns that low levels of concern in these industries may reflect a lack of awareness, not reduced risk.
AI is also fueling a rise in specific threats. Almost half of organizations say they’ve seen more targeted phishing attempts in the past year. Deepfake impersonations are also becoming more common, with attackers mimicking CEOs and other senior leaders to trick employees.
Cloud storage is another growing risk: 83% of respondents say they’re concerned about malicious files being uploaded to storage platforms. At the same time, some serious threats are still being underestimated. Zero-day attacks, for instance, were ranked among the lowest concerns, even though they often bypass traditional defenses and are becoming harder to detect.
Gaps in AI understanding
Another issue highlighted in the report is confusion around what AI actually means. While most organizations have adopted AI tools in some way, many security pros struggle to define key concepts such as deep learning and machine learning. Only 28% of respondents could accurately explain the relationship between the two, and that knowledge gap is even wider in sectors like critical infrastructure, which also reported the highest rate of AI-powered attacks.
This lack of clarity could have real-world consequences. Understanding how different types of AI work is important for choosing the right tools and preparing for the next generation of threats.
Moving toward prevention
With both the volume and sophistication of attacks rising, more than 80% of organizations have shifted their focus toward prevention this year, up from 73% in 2024.
The move is driven in part by leadership. Two-thirds of respondents say their boards or C-suites are urging a prevention-first approach, one that stops threats before they reach the network rather than reacting after the damage is done.
To prepare for AI-driven threats, most organizations are investing in new technologies, expanding external partnerships, or developing internal security capabilities. Only 2% say they’re doing nothing.