Henkel CISO on the messy truth of monitoring factories built across decades
In this Help Net Security interview, Stefan Braun, CISO at Henkel, discusses how smart manufacturing environments introduce new cybersecurity risks. He explains where single points of failure hide, how attackers exploit legacy systems, and why monitoring must adapt to mixed-generation equipment. His insights show why resilience depends on visibility, autonomy, and disciplined vendor accountability.

What’s the most common architectural pattern you see that inadvertently creates a “single point of operational failure” in smart manufacturing environments?
The most frequent and overlooked single point of failure is the reliance on a single, nonredundant engineering workstation, followed closely by the growing dependency on external cloud connectivity.
On the factory floor, it is common to find a solitary engineering workstation that holds the only up-to-date copies of critical logic files, proprietary configuration tools, and project backups. If that specific computer suffers a hardware failure or is compromised by ransomware, the maintenance team loses the ability to diagnose errors or recover the production line. The entire manufacturing process becomes dependent on a single, often unmanaged, desktop computer.
Additionally, as factories modernize, there is a dangerous architectural shift where local production sites rely on cloud-based SaaS platforms for real-time instructions or user authentication. If the internet connection is severed, or if the third-party cloud provider suffers an outage, the equipment on the floor stops working. This architecture fails because it prioritizes connectivity over local autonomy, creating a fragile ecosystem in which a disruption in a remote cloud environment turns physical machinery into a "digital brick."
If you had to map out an adversary’s ideal kill chain inside a smart factory, what step do you think is currently the easiest for them, and why?
Once an adversary establishes a foothold inside the perimeter, the easiest step is exploiting the massive technical debt inherent in unmanaged legacy assets and auxiliary Internet of Things devices.
While the core safety controllers might be locked down, the internal network is often littered with “soft” targets that allow for rapid lateral movement. This includes human-machine interfaces running obsolete operating systems like Windows 7 or XP, networked security cameras with default passwords, and smart sensors that have never received a firmware update.
This step is the path of least resistance because these devices act as trusted insiders. An attacker does not need sophisticated "zero-day" exploits to compromise a fifteen-year-old human-machine interface; they often just need publicly known vulnerabilities that will never be fixed by the vendor. By compromising a peripheral camera or an outdated visualization node, they gain a persistence mechanism that security teams rarely monitor, allowing them to map the operational technology network and prepare for a disruptive attack on the critical control systems at their leisure.
Many factories run decades-old PLCs alongside cloud-native MES platforms. What does monitoring look like in an environment where telemetry quality varies by an order of magnitude?
Effective monitoring here requires accepting heterogeneity as a fundamental design parameter and moving away from the idea of a “single pane of glass” toward a tiered visibility model. One cannot expect a thirty-year-old programmable logic controller to output rich data logs, so detection strategies must be layered:
High-fidelity tier: For modern assets, specifically "IT-in-OT" systems like human-machine interfaces, engineering workstations, and servers, security teams should deploy active agents like endpoint detection and response software. These provide visibility into process execution and file changes, exactly as they do in the corporate environment.
Passive network tier: For the vast majority of operational assets (programmable logic controllers, drives, and legacy input/output devices) that cannot support software agents, the standard is network detection and response. This treats the network traffic as the source of truth, analyzing communication flows for anomalies, new connections, or unexpected commands (like a “Stop” instruction sent during production).
The goal is correlated situational awareness. Success occurs when analysts can link a high-fidelity alert from a workstation with a passive network anomaly on a controller, providing a complete picture of the attack path despite the varying ages of the technologies involved.
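To make the passive network tier concrete, here is a minimal Python sketch of the kind of rule a network detection system might apply to parsed traffic from a mirror port. Everything here is illustrative: the field names, the addresses, the "STOP" function label, and the maintenance window are assumptions for the example, not any product's actual API.

```python
# Minimal sketch of a passive network-tier check, assuming OT traffic from a
# SPAN port has already been parsed into simple flow records (dicts). All
# field names and values below are illustrative assumptions.
from datetime import datetime, time

# Hypothetical policy: controllers may only receive a "STOP" command during a
# maintenance window, and only from the trusted engineering workstation.
MAINTENANCE_WINDOW = (time(2, 0), time(4, 0))
ENGINEERING_WS = "10.20.0.15"

def is_anomalous(record: dict) -> bool:
    """Flag a 'Stop' instruction sent outside the window or by an untrusted host."""
    if record["function"] != "STOP":
        return False
    ts = datetime.fromisoformat(record["timestamp"]).time()
    in_window = MAINTENANCE_WINDOW[0] <= ts <= MAINTENANCE_WINDOW[1]
    trusted_source = record["src"] == ENGINEERING_WS
    return not (in_window and trusted_source)

# Example: a STOP sent mid-shift from a host that should never issue one.
alert = is_anomalous({
    "src": "10.20.0.99",           # camera segment, not the engineering WS
    "dst": "10.20.1.5",            # a production controller
    "function": "STOP",
    "timestamp": "2024-05-14T13:42:10",
})
print("ALERT" if alert else "ok")
```

The design point is that the network, not the device, is the source of truth: even a controller that cannot log anything can still be protected by watching who talks to it, when, and with which commands.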
Factories rarely shut down. What does realistic tabletop testing look like when almost nothing can come offline?
Since one cannot simply turn off a machine on a live line, the most valuable tabletop exercises focus on “war-room choreography.” Facilitators simulate the symptoms of an attack, reporting that the manufacturing execution system is encrypted or safety systems are unreachable, to test the human response. The goal is to stress-test the escalation matrix: Who has the authority to order a full manual shutdown? How does the plant communicate with executive leadership when email systems are compromised? Are the legal and public relations templates ready for a regulatory disclosure?
Smart factories depend heavily on vendor firmware and integrator code. What’s a due-diligence question you wish more CISOs asked suppliers but rarely do?
A critical question for CISOs to ask is: “Can you provide a continuously updated Software Bill of Materials for your firmware, and what is your specific process for mitigating vulnerabilities in embedded third-party libraries?”
Traditional due diligence focuses on the vendor's corporate security: Do they have firewalls? Do they perform background checks? But in a smart factory, the risk is often in the code inside the code. A new programmable logic controller might run a web server using an open-source library that has not been updated in five years. When a major vulnerability is discovered in that library, the security team needs to know immediately which devices on the floor contain that specific component.
Asking for a Software Bill of Materials shifts the conversation from "Do you secure your building?" to "Do you know the ingredients in your product?" It forces the supplier to acknowledge their supply chain risks and reveals whether they have the maturity to manage the lifecycle of the open-source dependencies they are selling.
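As an illustration of the check an SBOM makes possible, here is a minimal sketch that scans a CycloneDX-style JSON SBOM for a known-bad library version. The file name, network address of the use case, and the flagged library and version are hypothetical placeholders, not references to a real advisory.

```python
# Minimal sketch, assuming the supplier delivers a CycloneDX-style JSON SBOM
# (a "components" array of objects with "name" and "version" fields).
# The vulnerable package and the file name below are illustrative assumptions.
import json

VULNERABLE = {"libexpat": "2.2.5"}  # hypothetical library/version with a known CVE

def affected_components(sbom_path: str) -> list[str]:
    """Return components in the SBOM that match a known-bad version."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    hits = []
    for comp in sbom.get("components", []):
        name, version = comp.get("name"), comp.get("version")
        if VULNERABLE.get(name) == version:
            hits.append(f"{name}=={version}")
    return hits

print(affected_components("plc_firmware_sbom.json"))
```

With a continuously updated SBOM, this kind of lookup turns "Which devices are exposed?" from a weeks-long vendor inquiry into a query the security team can run itself the day an advisory lands.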
