Security work keeps expanding, even with AI in the mix

Board attention continues to rise, and security groups now operate closer to executive decision making than in prior years, a pattern reflected in the Voice of Security 2026 report by Tines. Within that environment, large numbers of teams already rely on AI, automation, and workflow tools as part of routine operations, creating a baseline expectation that AI plays a central role in security work.

Board-level engagement has grown over the past year, particularly in larger enterprises. Security teams now participate more often in discussions tied to resilience, risk tolerance, and operational continuity. Alignment with broader business objectives still requires sustained effort, especially when teams manage competing priorities such as cloud security, privacy obligations, detection coverage, and incident readiness.

Board visibility rises alongside operational strain

Higher visibility has brought additional scrutiny of outcomes and metrics. Leaders commonly track security spending, compliance posture, training completion, and estimated incident costs. Practitioners focus more on incident volume, vulnerability exposure, and detection speed. This mix reflects expectations that security programs deliver accountability to the business while maintaining technical performance.

Workloads continue to expand. Manual and repetitive tasks still consume a large share of daily time, often stretching across evidence collection, ticket handling, and coordination between tools. This pattern persists even in environments where AI is widely deployed, contributing to fatigue and pressure across operational roles.

AI becomes embedded in everyday security tasks

AI already supports a broad range of security functions. Common use cases include threat intelligence, detection, identity monitoring, phishing analysis, ticket triage, reporting, and compliance documentation. Many teams also rely on AI to assist with developer support, log analysis, and security training activities.

AI-related risk now forms part of the core threat landscape. Data leakage through AI copilots, unmanaged internal AI use, and prompt manipulation rank among top concerns. Internal use cases generate particular attention because they intersect with sensitive data, workflows, and access controls. Third-party AI use and evolving regulatory requirements add further layers of oversight responsibility.

Governance becomes a daily security function

Formal AI policies and governance frameworks now appear across a large share of organizations. Teams with established policies report greater confidence that AI outputs pass through review steps or guardrails before influencing decisions. Governance work spans data handling, access management, auditability, and lifecycle oversight for AI models and integrations.
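As a rough illustration of the review step described above, the sketch below gates an AI-generated recommendation behind a simple policy check and an explicit human approval before it can influence a decision. The names, thresholds, and policy rules are hypothetical, not drawn from the report.

    # Minimal sketch of a guardrail gate for AI output (hypothetical names and rules).
    from dataclasses import dataclass

    @dataclass
    class AIRecommendation:
        action: str          # e.g. "disable_account"
        target: str          # e.g. "user@example.com"
        confidence: float    # model-reported confidence, 0.0 to 1.0

    # Actions allowed to run without a human in the loop (assumed policy).
    AUTO_APPROVED_ACTIONS = {"tag_ticket", "enrich_alert"}

    def review_gate(rec: AIRecommendation, human_approved: bool = False) -> bool:
        """Return True only if the recommendation may influence a decision."""
        if rec.confidence < 0.8:
            return False                 # low-confidence output never auto-runs
        if rec.action in AUTO_APPROVED_ACTIONS:
            return True                  # low-risk actions pass automatically
        return human_approved            # everything else needs explicit sign-off

    rec = AIRecommendation("disable_account", "user@example.com", confidence=0.93)
    print(review_gate(rec))                       # False: high-risk, no approval yet
    print(review_gate(rec, human_approved=True))  # True: signed off by a reviewer

The point of the gate is not the specific thresholds but the shape: every AI output passes a defined checkpoint, and the checkpoint is code that can be audited.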

Security and compliance considerations also affect how quickly teams operationalize automation. Concerns around data protection, regulatory obligations, tool integration, and staff readiness continue to influence adoption patterns. Budget limits and legacy systems remain common constraints, reinforcing the need for governance structures that support day-to-day execution.

Manual work drives burnout and retention risk

Teams managing large tool inventories report higher strain, particularly when workflows require frequent context switching. Leaders increasingly view automation and tooling improvements as key levers for retaining staff. Practitioners consistently place work-life balance and meaningful impact at the center of retention decisions.

Manual work also introduces operational risk. Human error compounds during repetitive processes, and limited capacity restricts response speed during incidents. Automation and orchestration offer opportunities to reduce repetitive tasks and stabilize operations, especially when workflows connect tools and people through defined processes.

Intelligent workflows gain attention

Many teams express interest in workflow platforms that connect automation, AI, and human review within a single operational layer. These approaches focus on moving work across systems without constant manual handoffs. Respondents associate connected workflows with higher productivity, faster response times, improved data accuracy, and stronger compliance tracking.
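One way to picture that operational layer is a pipeline that enriches an alert, asks a model for a triage suggestion, and routes high-impact cases through human review before any ticket changes. The sketch below is illustrative only; every function is a stand-in for a real tool integration, and none of the names come from a specific vendor's API.

    # Illustrative connected workflow: enrich -> AI triage -> human review -> ticket.
    def enrich_alert(alert: dict) -> dict:
        alert["geo"] = "lookup-result"      # stand-in for a threat-intel lookup
        return alert

    def ai_triage(alert: dict) -> str:
        # Stand-in for a model call that returns a suggested severity.
        return "high" if "malware" in alert.get("summary", "") else "low"

    def update_ticket(alert: dict, severity: str, reviewed: bool) -> None:
        print(f"ticket updated: severity={severity}, human_reviewed={reviewed}")

    def run_workflow(alert: dict) -> None:
        alert = enrich_alert(alert)
        severity = ai_triage(alert)
        # Placeholder for a real approval step: high severity pauses for a human.
        reviewed = severity == "high"
        update_ticket(alert, severity, reviewed)

    run_workflow({"summary": "possible malware beacon from host-42"})

The manual handoffs that respondents associate with lost productivity sit exactly at the boundaries between these functions; once the boundaries are explicit, they can be measured, audited, and automated.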

Interoperability also plays a growing role. Security teams increasingly consider standardized frameworks and APIs that allow AI systems to interact with tools under controlled conditions, an effort to embed AI directly into operational processes rather than run it as a disconnected layer.
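In practice, "controlled conditions" often comes down to a deny-by-default allowlist between the model and the tools it can reach, with an audit trail on every call. A minimal sketch, assuming a generic tool-calling setup (the registry, allowlist, and audit line are all hypothetical):

    # Expose tools to an AI system only through an allowlist, and log every call.
    from typing import Callable

    TOOL_REGISTRY: dict[str, Callable[..., str]] = {}
    ALLOWED_TOOLS = {"lookup_ip"}            # deny-by-default policy

    def register_tool(name: str, fn: Callable[..., str]) -> None:
        TOOL_REGISTRY[name] = fn

    def call_tool(name: str, **kwargs) -> str:
        if name not in ALLOWED_TOOLS:
            raise PermissionError(f"tool '{name}' is not allowlisted for AI use")
        print(f"audit: AI called {name} with {kwargs}")  # stand-in for an audit log
        return TOOL_REGISTRY[name](**kwargs)

    register_tool("lookup_ip", lambda ip: f"reputation for {ip}: clean")
    register_tool("delete_user", lambda user: f"deleted {user}")

    print(call_tool("lookup_ip", ip="203.0.113.7"))   # permitted and logged
    # call_tool("delete_user", user="alice")          # raises PermissionError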

“AI alone won’t fix broken security operations. Teams see its enormous potential for time savings and morale gains, but without strong governance and well-designed workflows, that potential remains out of reach,” said Thomas Kinsella, chief customer officer at Tines.

Download report: Voice of Security 2026
