How AI is reshaping the cybersecurity job landscape

88% of cybersecurity professionals believe that AI will significantly impact their jobs, now or in the near future, and 35% have already witnessed its effects, according to ISC2’s AI study, AI Cyber 2024.

Impact of AI on cybersecurity professionals

While there is considerable positivity about AI's role in dealing with cyberattacks, the findings also highlight professionals' urgent call for the industry to be prepared to mitigate cyber risks and safeguard the entire ecosystem.

The survey respondents are highly positive about AI's potential. Overall, 82% agree that AI will improve their efficiency as cybersecurity professionals. That optimism is tempered by the 56% who note that AI will make some parts of their job obsolete.

Again, the obsolescence of some job functions isn't necessarily a negative; rather, it reflects the evolving role of people in cybersecurity in the face of rapidly advancing, autonomous software solutions, particularly those charged with carrying out repetitive and time-consuming cybersecurity tasks.

75% of respondents are moderately to extremely concerned that AI will be used for cyberattacks or other malicious activities. Deepfakes, misinformation and social engineering are the top three concerns for cyber professionals.

AI governance in focus

There is a growing disparity between professionals' confidence and their actual preparedness to navigate these concerns: 60% feel confident in their ability to lead their organization's secure adoption of AI, yet 41% of participants have minimal or no expertise in securing AI and ML technology, and 82% feel there is a need for comprehensive, specific regulations governing AI's safe and ethical use.

Despite the concerns, only 27% of participants said their organization has formal policies in place on the safe and ethical use of AI, with 39% stating their organization is currently discussing a formal policy. When asked ‘who should regulate AI’s safe and ethical use,’ the study revealed that cyber professionals hope for global coordination between national governments and a consortium of AI experts.

“Cybersecurity professionals anticipate both the opportunities and challenges AI presents, and are concerned their organizations lack the expertise and awareness to introduce AI into their operations securely,” said ISC2 CEO Clar Rosso, CC. “This creates a tremendous opportunity for cybersecurity professionals to lead, applying their expertise in secure technology and ensuring its safe and ethical use. In fact, ISC2 has developed AI workshops to foster the expert-led collaboration the cybersecurity workforce needs to address this challenge.”

Respondents were clear that governments need to take more of a lead if organizational policy is to catch up, even though 72% agreed that different types of AI will need tailored regulations. Regardless, 63% said regulation of AI should come from collaborative government efforts (ensuring standardization across borders), and 54% wanted national governments to take the lead.

The survey revealed that 12% of respondents said their organizations had blocked all access to generative AI tools in the workplace.

AI is everywhere, and while the cybersecurity industry was quick to adopt AI and ML as part of its latest generation of defensive and monitoring technologies, so too were the bad actors, who are leaning on the same technology to elevate the sophistication, speed, and accuracy of their own cybercrime activities.