AI set to enhance cybersecurity roles, not replace them

In this Help Net Security interview, Caleb Sima, Chair of the CSA AI Security Alliance, discusses how AI empowers security pros, emphasizing its role in enhancing skills and productivity rather than replacing staff.


AI is seen as empowering rather than replacing security pros. How do you foresee AI changing their roles in the future?

While the future of AI replacing jobs remains uncertain, I am confident it’s not imminent. AI is a tool that can be used to empower rather than replace security pros. In fact, a survey – State of AI and Security Survey Report – that CSA recently conducted with Google found that the majority of organizations plan to use AI to strengthen their teams, whether that means enhancing their skills and knowledge base (36%) or improving detection times (26%) and productivity (26%), rather than replacing staff altogether.

In the immediate future, we will see AI leveraged to automate a host of repetitive tasks (such as reporting) across teams. This will free up significant chunks of time currently spent on compiling management reports, for example, and enable those teams to focus on higher-priority work. This aligns with what the survey found, where 58% of respondents indicated that they believe AI will enhance their skill set or generally support their current role. An additional 24% see AI as replacing parts of their job, allowing them to focus on other activities.
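
To give a flavor of the reporting automation described above, here is a minimal sketch that rolls raw alert records into a short management summary. The record fields and counts are hypothetical stand-ins for whatever data an organization actually collects; in practice a generative model could turn these aggregates into narrative prose.

```python
# Minimal sketch: compiling raw alert records into a management report.
# Field names and sample data are illustrative assumptions, not a real feed.
from collections import Counter

alerts = [
    {"severity": "high", "category": "phishing"},
    {"severity": "low", "category": "malware"},
    {"severity": "high", "category": "phishing"},
    {"severity": "medium", "category": "misconfig"},
]

def weekly_report(records: list[dict]) -> str:
    """Aggregate alerts by severity and category into a plain-text summary."""
    by_severity = Counter(r["severity"] for r in records)
    by_category = Counter(r["category"] for r in records)
    lines = [f"Alerts this week: {len(records)}"]
    lines += [f"  {sev}: {n}" for sev, n in by_severity.most_common()]
    top, n = by_category.most_common(1)[0]
    lines.append(f"Top category: {top} ({n} alerts)")
    return "\n".join(lines)

print(weekly_report(alerts))
```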

Security teams, for instance, can leverage AI algorithms to identify and remediate threats far faster and more effectively than they could through human analysis alone. Similarly, by feeding in historical data, security teams can use AI to help predict potential threats and plan mitigation strategies before those threats escalate. Regardless, security experts will need to learn how to best leverage AI both in their organization and in their individual roles.
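
As one concrete illustration of that kind of workflow, the sketch below trains an anomaly detector on historical telemetry and flags outlying events for analyst review. It uses scikit-learn's IsolationForest; the feature set, synthetic data, and contamination rate are illustrative assumptions, not a prescription from the interview.

```python
# Minimal sketch: flagging anomalous events in historical security telemetry.
# Features per event are assumed to be [bytes_out, failed_logins, distinct_ports];
# the synthetic data and 1% contamination rate are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Stand-in for a year of historical log features.
historical = rng.normal(loc=[5_000, 1, 3], scale=[1_500, 1, 2], size=(1_000, 3))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(historical)

# New events to triage; the last row simulates an exfiltration-like outlier.
new_events = np.array([
    [5_200, 0, 2],
    [4_800, 2, 4],
    [90_000, 15, 40],
])
labels = model.predict(new_events)  # -1 = anomaly, 1 = normal
for event, label in zip(new_events, labels):
    if label == -1:
        print(f"Escalate for analyst review: {event}")
```

In practice such a model would be retrained as the environment changes, with analysts validating flagged events before any automated remediation is triggered.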

How do security pros perceive their organization’s cybersecurity maturity regarding AI integration?

Integrating AI mostly involves applying standard security measures. A small part of the process addresses novel AI risks. Security professionals often fear this unknown territory until they explore the specifics for their organization.

Regardless, this year is going to be transformative when it comes to companies implementing AI. The survey I mentioned found that more than half of organizations are planning to implement a GenAI solution this year, with the C-suite driving that adoption. It also revealed that more than 80% of respondents see their organizations as moderately to highly mature. What that doesn’t tell us, though, is whether respondents’ perception is grounded in reality.

Ready or not, though, AI is coming. That being the case, I’d caution companies, regardless of where they are on their AI journey, to understand that they will encounter challenges, whether from integrating the technology into current processes or from ensuring that staff are properly trained to use it, and that’s to be expected. As a cloud security community, we will all be learning together how we can best use this technology to further cybersecurity.

There’s a significant awareness of AI’s potential misuse. How should organizations prepare to mitigate these risks?

First, companies need to treat AI with the same consideration as they would a person in a given position, emphasizing best practices. They will also need to determine the AI’s function: if it merely supplies supporting data in customer chats, the risk is minimal. But if it integrates with and performs operations on internal and customer data, it’s imperative that they prioritize strict access control and separation of roles. Most risks are already mitigable with current controls; the challenge stems from unfamiliarity with AI, which leads to assumptions that we lack safeguards.
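
To make the access-control point concrete, here is a minimal sketch of gating an AI assistant's actions behind per-role allow-lists, so a chat-support model can read knowledge-base articles but never touch customer records. All names here (AgentRole, ALLOWED_ACTIONS, the action strings) are hypothetical illustrations, not any specific product's API.

```python
# Minimal sketch: least-privilege gating for AI-initiated actions.
# Roles, action names, and payloads are hypothetical, for illustration only.
from enum import Enum

class AgentRole(Enum):
    SUPPORT_CHAT = "support_chat"      # supplies supporting data only
    OPS_AUTOMATION = "ops_automation"  # operates on internal/customer data

# Explicit allow-lists: anything not listed is denied by default.
ALLOWED_ACTIONS = {
    AgentRole.SUPPORT_CHAT: {"read_kb_article"},
    AgentRole.OPS_AUTOMATION: {"read_kb_article", "read_ticket", "update_ticket"},
}

def execute_agent_action(role: AgentRole, action: str, payload: dict) -> str:
    """Run an AI-requested action only if the role's allow-list permits it."""
    if action not in ALLOWED_ACTIONS.get(role, set()):
        # Deny and log, exactly as you would for an over-privileged human account.
        return f"DENIED: role {role.value!r} may not perform {action!r}"
    return f"OK: {action} executed with {payload}"

# A chat assistant asking to modify customer data is refused outright.
print(execute_agent_action(AgentRole.SUPPORT_CHAT, "update_ticket", {"id": 42}))
print(execute_agent_action(AgentRole.SUPPORT_CHAT, "read_kb_article", {"id": 7}))
```

This mirrors the "treat AI like a person in a given position" framing: the same separation-of-duties controls applied to human accounts apply to the model's permissions.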

How do you view the current state of AI-related cybersecurity training, and what improvements are needed to prepare the workforce?

We’ve been talking about a skills gap in the security industry for years now, and AI will deepen that gap in the immediate future. We’re at the beginning stages of learning, and understandably, training hasn’t caught up yet. AI evolves so rapidly that training materials quickly become outdated. As organizations increasingly look to train their workforce on how to best leverage AI, they should concentrate on stable concepts and emphasize that AI security largely relies on the established best practices already used for applications and infrastructure.

With 74% of organizations planning to create dedicated AI governance teams, how do you see these teams shaping cybersecurity’s future?

AI oversight is critical today given the uncertainty around its implications. Over time, as AI literacy increases and the technology is integrated into everything, its risks will become clearer, shifting AI governance from specialized teams to broader technology management.

In the short term, the creation of governance teams speaks to the seriousness with which companies are approaching the integration and management of AI. These teams are likely to be tasked with addressing everything from corporate policy development and ethical considerations to risk management and regulatory compliance. We’re already seeing issues surrounding transparency crop up in the news with respect to images and copy, and that will continue to play out as we move forward. As a society we demand a certain level of trust in the companies and media we interact with, so it’s essential that this trust remain unbroken.
