AIBOMs are the new SBOMs: The missing link in AI risk management
In this Help Net Security interview, Marc Frankel, CEO at Manifest Cyber, discusses how overlooked AI-specific risks, like poisoned training data and shadow AI, can lead to security issues that conventional tools fail to detect. He explains how AI Bills of Materials (AIBOMs) extend SBOMs to provide transparency into datasets, model weights, and third-party integrations, improving governance and incident response.
Frankel also outlines the steps organizations must take to achieve executive-grade visibility and maintain AI supply chain hygiene.
Can you walk us through a real-world example where AI-specific risks, such as poisoned training data, model drift, or shadow AI use, were overlooked by conventional tooling but later surfaced as a significant business or security issue?
Absolutely. The LAION-5B situation is a pretty awful example of this. LAION-5B is a massive dataset of 5.85 billion image-text pairs that became the foundation for hugely popular models like Stable Diffusion and countless other image generation systems that enterprises are using today. Stanford researchers discovered that LAION-5B contained approximately 1,600 instances of child sexual abuse material that had been scraped from the internet without proper filtering. The terrifying part is that this wasn’t just a dataset problem. Every AI model trained on LAION-5B potentially inherited this poisoned data, and traditional security scanning tools had absolutely no way to detect it.
What made this worse is that when the LAION-5B issue surfaced, most companies had no systematic way to trace which of their applications were affected. They couldn’t answer basic questions like “Do we use Stable Diffusion anywhere in our organization?” or “Which of our applications might be impacted?”, let alone trace the data lineage back to the original training set.
How do AIBOMs differ in structure and function from software SBOMs? What components, such as training datasets, model weights, or third-party APIs, are most critical for CISOs to track? Could you share a use case where an AIBOM materially improved transparency or incident response?
AIBOMs follow the same formats as traditional SBOMs, but add AI-specific content and metadata, such as model family, acceptable usage, and AI-specific licenses. If you are a security leader at a large defense contractor, for example, you’d need the ability to identify model developers and their country of origin. This ensures you are not using models originating from near-peer adversary countries, such as China.
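To make that concrete, here is a minimal sketch of the kind of AI-specific metadata an AIBOM might carry, loosely modeled on the CycloneDX machine-learning component shape; the model name, property keys, and values are illustrative assumptions, not a normative schema.

```python
import json

# Minimal sketch of AI-specific AIBOM metadata, loosely modeled on the
# CycloneDX "machine-learning-model" component type. Names and values
# below are illustrative assumptions, not a normative schema.
aibom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "example-image-model",        # hypothetical model
            "version": "2.1",
            "licenses": [{"license": {"name": "CreativeML Open RAIL-M"}}],
            "properties": [
                {"name": "model.family", "value": "latent-diffusion"},
                {"name": "model.developer", "value": "Example Labs"},
                {"name": "model.countryOfOrigin", "value": "US"},
                {"name": "model.acceptableUse", "value": "internal image generation only"},
                {"name": "training.dataset", "value": "LAION-5B"},
            ],
        }
    ],
}

print(json.dumps(aibom, indent=2))
```

Fields like country of origin and acceptable usage are what let the defense-contractor check described above become an automated query instead of a manual review.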
What kinds of competitive or operational benefits are early adopters experiencing, especially in sectors like finance, healthcare, or critical infrastructure? How are they leveraging AI compliance maturity in procurement, partnerships, or product differentiation?
There are several. First and foremost is significantly reducing the time and effort required to approve models. In a typical enterprise today, if a developer wants to use a new model, the request goes through a review process that can take weeks and must be conducted by a highly experienced and valuable team or team member. With AIBOMs and automated enforcement of corporate policies, that becomes a two-click problem.
Second is governance. Early AIBOM adopters can answer questions like, “Do we use DeepSeek anywhere?” or “Do I have an intellectual property risk associated with my models?” in seconds. Legislation worldwide, from the EU’s AI Act to California’s Assembly Bill 2013 and, most recently, the draft National Defense Authorization Act (NDAA), is pushing AI vendors towards providing transparency around their AI applications.
Finally, the third is security. When there is a LAION-5B-type scenario, another DeepSeek, or the inevitable next AI security threat, the mean time to patch and mean time to remediate for AIBOM adopters will be orders of magnitude lower than for enterprises without AI inventories.
What does “executive-grade visibility” look like when it comes to deployed AI systems? What metrics, dashboards, or governance artifacts should CISOs be pushing for to help board-level stakeholders understand risk exposure and accountability?
Executive-grade AI visibility enables boards to obtain clear answers to business-critical questions in real time, not in weeks. Start with the fundamentals:
- “Do any of our models come from China, Russia, or Iran?”
- “Is any of our AI built on legacy models that are no longer supported?”
- “Do we have the legal right to use these models and datasets in our industry?”
- “Does any of our AI ship with software vulnerabilities?”
- “Where’s the complete inventory of all our models and datasets?”
Once you have those fundamentals down, then it’s time to start asking higher-order questions like:
- “What’s our process for identifying shadow AI in our developer code?”
- “What’s our compliance process with these emerging regulations?”
- “What would our response time be if we discovered a poisoned dataset or model?”
Most boards can’t answer these basic questions today because they lack transparency into their AI. The organizations that can answer them quickly have a massive competitive advantage: they can deploy AI faster, respond to incidents with precision, and maintain regulatory compliance without disrupting their operations.
The difference between executive-grade visibility and technical reporting is simple: executives need to understand business impact and risk prioritization, not vulnerability counts.
What steps can organizations take to surface and manage hidden AI assets? What role does inventorying and classification play in AI supply chain hygiene?
The first step is inventorying your AI. Use AIBOMs to inventory your AI dependencies, track what is approved vs. requested vs. denied, and make sure you understand what is deployed where.
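As a rough illustration of what that inventory can look like in practice, here is a minimal sketch in Python; the record fields, statuses, and model names are assumptions for illustration, not any particular product’s data model.

```python
from dataclasses import dataclass, field

# Minimal sketch of an AI inventory keyed by model, tracking approval
# status and where each model is deployed. Fields are illustrative.
@dataclass
class ModelRecord:
    name: str
    source: str                    # e.g. "huggingface", "vendor API"
    status: str                    # "approved" | "requested" | "denied"
    deployed_in: list[str] = field(default_factory=list)

inventory = [
    ModelRecord("stable-diffusion-v1-5", "huggingface", "approved",
                ["marketing-image-service"]),
    ModelRecord("deepseek-r1", "huggingface", "requested", []),
]

def where_is(model_name: str) -> list[str]:
    """Answer 'do we use X anywhere, and in which applications?'"""
    return [app for rec in inventory if rec.name == model_name
            for app in rec.deployed_in]

print(where_is("stable-diffusion-v1-5"))
```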
The second is to actively seek out AI rather than waiting for employees to report it. Organizations need the ability to identify AI in code and automatically generate the corresponding AIBOMs, integrated into the MLOps pipeline so that new AI usage surfaces as it occurs.
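A minimal sketch of that kind of detection step follows, assuming a Python codebase and a handful of deliberately coarse patterns (library imports, from_pretrained calls, Hugging Face URLs); a real pipeline stage would be far more thorough and would emit an AIBOM rather than just print matches.

```python
import re
from pathlib import Path

# Minimal sketch of a repo scan that flags likely AI usage in code so an
# AIBOM can be generated for review. The patterns are illustrative and
# deliberately coarse.
AI_PATTERNS = [
    r"\bimport\s+(torch|transformers|tensorflow|openai)\b",
    r"from_pretrained\(",             # Hugging Face-style model loads
    r"https?://huggingface\.co/\S+",  # direct model references
]

def scan_repo(root: str) -> list[tuple[str, int, str]]:
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(re.search(p, line) for p in AI_PATTERNS):
                hits.append((str(path), lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for file, lineno, line in scan_repo("."):
        print(f"{file}:{lineno}: {line}")
```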
The third is to develop and adopt responsible AI policies. Some of them are fairly common-sense: no contributors from OFAC countries, no copyleft licenses, no usage of models without a three-month track record on HuggingFace, and no usage of models over a year old without updates. Then enforce those policies in an automated, scalable system.
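Here is a minimal sketch of what automated enforcement of those policies could look like when run against AIBOM-style metadata; the field names, the sample record, and the country and license lists are illustrative assumptions, not a complete rule set.

```python
from datetime import date, timedelta

# Minimal sketch of automated policy checks over AIBOM-style metadata.
# Thresholds mirror the policies above; field names and the country and
# license lists are illustrative assumptions.
OFAC_COUNTRIES = {"IR", "KP", "CU", "SY"}          # illustrative subset
COPYLEFT_LICENSES = {"GPL-2.0", "GPL-3.0", "AGPL-3.0"}

def evaluate(model: dict, today: date = date.today()) -> list[str]:
    violations = []
    if model.get("contributor_countries", set()) & OFAC_COUNTRIES:
        violations.append("contributor from OFAC-sanctioned country")
    if model.get("license") in COPYLEFT_LICENSES:
        violations.append("copyleft license")
    if (today - model["published"]) < timedelta(days=90):
        violations.append("less than a three-month track record")
    if (today - model["last_updated"]) > timedelta(days=365):
        violations.append("no updates in over a year")
    return violations

candidate = {
    "name": "example-model",                 # hypothetical AIBOM entry
    "license": "Apache-2.0",
    "contributor_countries": {"US", "DE"},
    "published": date(2024, 1, 15),
    "last_updated": date(2024, 11, 1),
}
print(evaluate(candidate) or "policy checks passed")
```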
The key is moving from reactive discovery to proactive monitoring. Most organizations only discover AI usage when something goes wrong or during manual audits. By that point, you’re already exposed to risk. Automated detection and policy enforcement transform AI governance from a compliance exercise into operational intelligence, enabling faster and safer AI adoption.