Why IAM should be the starting point for AI-driven cybersecurity
In this Help Net Security interview, Benny Porat, CEO at Twine Security, discusses applying AI agents to security decisions. He explains why identity and access management (IAM) is the ideal starting point for both augmentation and automation, and shares advice on building trust in AI agents and integrating them into existing workflows.
Which cybersecurity functions or domains are best suited for AI augmentation vs. full automation? How should CISOs think about that division of labor?
As identity is one of the most critical attack surfaces, I believe that identity and access management (IAM) is the cybersecurity domain that companies should start with, both for AI augmentation and full automation. It’s not one or the other.
A healthy process for organizations is to start working with the AI agent on augmentation, then move to full automation once trust is gained and the agent has learned your organization's specific needs and edge cases. High-volume, low-complexity tasks like identity hygiene, account ownership verification, and routine IAM workflows are good places to start. From there, move up to more complex scenarios requiring human judgment, such as remediating audit findings, stale account identification and cleanup, and user access reviews (UARs), where AI helps accelerate processes while keeping humans in the loop.
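To make this concrete, here is a minimal Python sketch of one such high-volume, low-complexity task: flagging stale accounts for human review. The Account fields, the 90-day threshold, and the sample data are illustrative assumptions, not any specific IAM product's model.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative threshold; real staleness policies vary by organization.
STALE_AFTER = timedelta(days=90)

@dataclass
class Account:
    username: str
    owner: str | None          # None means ownership is unverified
    last_login: datetime | None

def find_stale_accounts(accounts: list[Account], now: datetime) -> list[Account]:
    """Flag accounts with no login within STALE_AFTER, or no login at all.

    In an augmentation phase the output is a review queue for humans;
    only after trust is established would deactivation be automated.
    """
    return [
        acct for acct in accounts
        if acct.last_login is None or now - acct.last_login > STALE_AFTER
    ]

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    accounts = [
        Account("alice", "alice@example.com", now - timedelta(days=3)),
        Account("svc-legacy", None, now - timedelta(days=200)),
    ]
    for acct in find_stale_accounts(accounts, now):
        print(f"Review: {acct.username} (owner: {acct.owner or 'unverified'})")
```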
How should CISOs restructure their security teams to integrate AI agents? Are we talking augmentation of existing roles, or the emergence of entirely new ones?
When a new domain in cybersecurity emerges, we often see the same pattern: first, organizations and security teams form a new team focused on gaining expertise, so new roles are created. This happened in many organizations when cloud security became its own discipline.
After a while, you see consolidation: the existing cyber teams and the new team combine, and everyone gains augmented capabilities (for instance, covering security aspects and supervising the AI agents). I foresee the same happening with agentic AI, as these agents will fundamentally transform how security professionals operate.
Should AI-driven decisions in cybersecurity be auditable in the same way that human decisions are? If so, what are the technical requirements to make that happen?
AI-driven decisions in cybersecurity should be even more auditable than human decisions, and they already are by design.
Let’s look at a different industry: the technology for autonomous cars has existed for a while, but adoption took longer than initially expected. Part of the reason was that every accident involving an autonomous car received wide coverage and brought the entire industry to a halt, while human-caused accidents didn’t get the same attention.
Now let’s go back to AI agents. Unlike human decision-making, which relies on memory and subjective recollection, AI agents create complete, immutable audit trails capturing every decision point, data input, logical step, and action taken throughout the entire process. That complete history is impossible to obtain for human decisions, where thought processes often go undocumented.
The technical requirements include comprehensive logging systems that capture entire decision trees with all data sources and weights applied, explainable AI frameworks providing reasoning for every action, immutable audit logs, and decision provenance tracking that maps every action back to its root cause. This transforms compliance from reactive documentation into proactive capability where every AI-driven security decision is automatically documented, justified, and available for immediate review.
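As one way to picture those requirements, here is a minimal Python sketch of a tamper-evident audit trail in which each record captures the decision point, inputs, reasoning, and action, and is chained to its predecessor by a hash so after-the-fact edits are detectable. The record fields and the chaining scheme are illustrative assumptions, not a description of any specific product.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only log where each entry embeds a hash of the previous one,
    so any alteration of history breaks the chain and is detectable."""

    def __init__(self):
        self._entries: list[dict] = []

    def record(self, decision: str, inputs: dict, reasoning: str, action: str) -> dict:
        prev_hash = self._entries[-1]["hash"] if self._entries else "genesis"
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "decision": decision,     # the decision point reached
            "inputs": inputs,         # data sources the agent consulted
            "reasoning": reasoning,   # explainable-AI justification
            "action": action,         # what was actually done
            "prev_hash": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash to confirm no entry was altered."""
        prev_hash = "genesis"
        for entry in self._entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev_hash:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

if __name__ == "__main__":
    trail = AuditTrail()
    trail.record(
        decision="disable_account",
        inputs={"last_login_days": 200, "owner": None},
        reasoning="No login in 200 days and ownership unverified",
        action="disabled svc-legacy",
    )
    print("chain intact:", trail.verify())
```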
What metrics or KPIs are most useful when assessing the performance and ROI of a human-AI team model in cybersecurity?
I often suggest teams start with a project so important that they would have done it anyway, even without advanced AI agent technology available. If it’s a project you would have done regardless, you should be able to build a scorecard of sorts, capturing the time it would have taken and the quality you expect in each aspect of the project.
Then, once you have this, bring an AI agent to do the exact same project and measure the ROI in the significant time saved, the resources and energy saved, and, of course, the money saved. Quality can be trickier to measure, and we see some AI tech suppliers in the market take advantage of that, so it’s important to set the bar at the very start of the project. Trust in the AI agent is key, and as agents learn the organization and its unique needs, they perform much better.
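As a back-of-the-envelope illustration of that scorecard, here is the arithmetic in Python. Every figure below is made up for the example; only the calculation itself is the point.

```python
# Hypothetical scorecard figures; only the arithmetic is the point.
baseline_hours = 320        # estimated analyst hours for the manual project
ai_assisted_hours = 60      # analyst hours with the AI agent in the loop
hourly_cost = 95.0          # loaded cost per analyst hour, USD
agent_cost = 8_000.0        # cost of the agent for the project, USD

hours_saved = baseline_hours - ai_assisted_hours
savings = hours_saved * hourly_cost
roi = (savings - agent_cost) / agent_cost

print(f"Hours saved: {hours_saved}")
print(f"Net savings: ${savings - agent_cost:,.0f}")
print(f"ROI: {roi:.0%}")    # about 209% on these made-up numbers
```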
What are the best practices for integrating AI into existing SIEM, SOAR, and EDR workflows without creating more alert fatigue or operational friction?
In the SOC world, one of the biggest challenges is volume. Letting AI simply ‘deal with the volume’ is risky, because in that scenario it’s often very hard to assess the quality of its work. That’s part of why I wouldn’t make the SOC the first AI use case in an organization, and part of why we start with IAM: wherever you want an AI agent to go fully autonomous, you must be able to verify quality.
Therefore, if I must give CISOs one tip for such workflows: as a first step, implement agentic AI in a way that enriches the context of existing alerts and surfaces suggestions that can actually move the needle.
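As a hedged illustration of that tip, here is a minimal Python sketch in which the agent only annotates an existing alert with identity context and a suggestion, leaving triage to the analyst. The alert fields, the directory lookup, and the suggestion text are all hypothetical.

```python
def enrich_alert(alert: dict, identity_directory: dict) -> dict:
    """Attach identity context and a suggestion to an existing alert.

    The agent only annotates; triage decisions stay with the analyst,
    so enrichment adds signal without generating new alerts.
    """
    user = identity_directory.get(alert.get("username"), {})
    enriched = dict(alert)
    enriched["context"] = {
        "department": user.get("department", "unknown"),
        "privileged": user.get("privileged", False),
        "recent_access_changes": user.get("recent_access_changes", []),
    }
    if enriched["context"]["privileged"]:
        enriched["suggestion"] = "Privileged account involved; prioritize review."
    else:
        enriched["suggestion"] = "Standard account; correlate before escalating."
    return enriched

alert = {"id": "A-1", "username": "svc-legacy", "signal": "impossible travel"}
directory = {"svc-legacy": {"department": "IT", "privileged": True}}
print(enrich_alert(alert, directory)["suggestion"])
```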