Organizations go ahead with AI despite security risks

AI adoption remains sky-high: 54% of data experts say that their organization already leverages at least four AI systems or applications, according to Immuta, and 79% report that their budget for AI systems, applications, and development has increased in the last 12 months.

AI adoption security challenges

The AI Security & Governance Report surveyed nearly 700 engineering leaders, data security professionals, and governance experts on their outlook for AI security and governance.

However, this adoption also carries massive uncertainty. For example, 80% of data experts agree that AI is making data security more challenging. Experts expressed concern around the inadvertent exposure of sensitive data by LLMs and adversarial attacks by malicious actors via AI models.

In fact, 57% of respondents have seen a significant increase in AI-powered attacks in the past year.

While rapid AI adoption is certainly introducing new security challenges, optimism about its potential is pushing organizations to adapt. Data leaders believe, for example, that AI will enhance current security practices through AI-driven threat detection systems (40%) and the use of AI for advanced encryption (28%).

Weighing these benefits against the security risks, 83% of organizations are updating internal privacy and governance guidelines and taking steps to address the new risks:

  • 78% of data leaders say that their organization has conducted risk assessments specific to AI security.
  • 72% are driving transparency by monitoring AI predictions for anomalies.
  • 61% have purpose-based access controls in place to prevent unauthorized usage of AI models.
  • 37% say they have a comprehensive strategy in place to remain compliant with recent and forthcoming AI regulations and data security needs.
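The purpose-based access controls mentioned above can be sketched in a few lines: a policy maps a caller's declared purpose to the AI models that purpose may invoke, and every model call is checked against that policy before it proceeds. The names below (`PURPOSE_POLICY`, `is_access_allowed`, the model identifiers) are hypothetical illustrations, not taken from Immuta or any specific product.

```python
# Minimal sketch of a purpose-based access control check for AI model usage.
# All identifiers here are illustrative assumptions, not a real product's API.

PURPOSE_POLICY = {
    # declared purpose -> set of model identifiers that purpose may invoke
    "fraud_detection": {"risk-scoring-v2"},
    "customer_support": {"support-chat-v1"},
}

def is_access_allowed(declared_purpose: str, model_id: str) -> bool:
    """Allow a model call only if the caller's declared purpose covers it."""
    return model_id in PURPOSE_POLICY.get(declared_purpose, set())

# A support agent may use the support chatbot, but not the risk-scoring model.
print(is_access_allowed("customer_support", "support-chat-v1"))   # True
print(is_access_allowed("customer_support", "risk-scoring-v2"))   # False
```

The key design choice is that access is granted per stated purpose rather than per user identity alone, which is what makes unauthorized or off-purpose use of a model detectable and blockable.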

“Current standards, regulations, and controls are not adapting fast enough to meet the rapid evolution of AI, but there is optimism for the future,” said Matt DiAntonio, VP of Product Management at Immuta. “The report clearly outlines a number of AI security challenges, as well as how organizations are looking to AI to help solve them. AI and machine learning are able to automate processes and quickly analyze vast data sets to improve threat detection, and enable advanced encryption methods to secure data.

“As organizations mature on their AI journeys, it is critical to de-risk data to prevent unintended or malicious exposure of sensitive data to the AI models. Adopting an airtight security and governance strategy around generative AI data pipelines and outputs is imperative to this de-risking,” concluded DiAntonio.

Strong confidence in AI data security strategies

Despite so many data leaders expressing that AI makes security more challenging, 85% say they’re somewhat or very confident that their organization’s data security strategy will keep pace with the evolution of AI.

In contrast, research from just last year found that 50% strongly or somewhat agreed their organization's data security strategy was failing to keep up with the pace of AI evolution. This shift suggests a maturity curve: many organizations are pressing ahead with AI initiatives despite the risks, because they judge the expected payoff to be worth it.

The rapid changes in AI are understandably exciting, but also unknown. This is especially true as regulations are fluid and many models lack transparency. Data leaders should pair their optimism with the reality that AI will continue to change — and the goalposts of compliance will continue to move as it does.

No matter what the future of AI holds, one action is clear: there is no responsible AI strategy without a data security strategy. Establish governance that supports a data security strategy that isn’t static, but rather one that dynamically adapts as innovation delivers results for the business.
