IT leaders alarmed by generative AI’s SaaS security implications

IT leaders are grappling with anxiety over the risks of generative AI despite continued confidence in their software-as-a-service (SaaS) security posture, according to Snow Software.

96% of respondents indicated they were still ‘confident or very confident’ in their organization’s SaaS security measures, yet ‘managing the security of SaaS applications’ remains the top challenge for IT leaders.

The effects of generative AI

IT leaders must now factor generative AI tools such as ChatGPT into their overall SaaS security approach. 23% of respondents said generative AI applications are the most concerning SaaS security issue. When asked how they would feel if a SaaS vendor used generative AI without their knowledge, 57% said they would feel alarmed and would require more information from the vendor.

A recent report from Forrester corroborates this concern, suggesting that organizations need to balance the risks of AI, including regulatory and intellectual property concerns, in order to see the expected return on investment.

“IT leaders have to walk a fine line between minimizing risk and creating efficiencies, while still realizing growth to propel their businesses during this period of economic turbulence,” said Steve Tait, CTO at Snow.

“The increasing complexities of SaaS, with mounting security concerns around generative AI, have made the need for IT visibility even more pressing. IT leaders need to govern the unknown as effectively as they do their approved vendors,” Tait continued.

Generative AI is compounding security concerns

While IT leaders who participated in last year’s survey also reported concerns with SaaS security, the main topic driving uncertainty in 2023 is artificial intelligence. The unknown security risks of generative AI are causing worry among IT leaders, more so than risks presented by other technologies.

  • When asked what application types are concerning from a security perspective, 23% of IT leaders said that generative AI applications were the most concerning SaaS security issue, followed by open-source applications (19%) and file sharing applications (17%).
  • 57% of respondents indicated that they would be alarmed if a SaaS vendor was using generative AI without their knowledge, while 36% expressed no concern and only 7% indicated that they would terminate services.
  • The data suggests that IT teams have reservations about the potential risks posed by their collective applications. In fact, 40% of respondents expressed concern over data protection or privacy, even though 61% indicated that they have extensive data governance and security tools to manage data shared with SaaS applications.

Decentralized leadership adds fuel to the fire

Despite the prevalence of SaaS applications within organizations, there seems to be some disagreement over who is ultimately accountable for them.

  • 65% of those surveyed said IT asset management (ITAM)/software asset management (SAM) teams were primarily responsible for purchasing and managing SaaS applications, followed by CIO or IT leaders (58%), security (28%) and procurement or vendor management (20%).
  • US respondents are more likely than UK respondents to say ITAM/SAM professionals are also responsible for mitigating issues related to SaaS (43% vs 34%). UK respondents, however, are more likely to consider the CIO or senior IT leader responsible (40%).

SaaS spending is still a major concern

Controlling the total cost of SaaS application investment was cited as the second most important aspect of managing SaaS applications (39%). Despite this concern, IT leaders are overwhelmingly confident that they could quickly and efficiently find savings and areas to optimize, with 90% of respondents saying they could.

“Mitigating security issues that are caused by unknown or unapproved SaaS application usage is essential, especially given the shift in purchasing power away from central IT to business units and individual employees,” said Tait.

“Gaining visibility is the first step to proactively protecting your organization, especially when using generative AI applications like ChatGPT. Unchecked SaaS sprawl leads to risk and overspend,” he concluded.