
Thus, he said, companies should set up a business risk program with a governing body that defines and manages those risks and monitors AI for behavioral changes.
Reframe how AI is managed
Sanchit Vir Gogia, chief analyst at Greyhound Research, said addressing this problem requires executives to first reframe the structural questions.
“Most enterprises still talk about AI inside operational environments as if it were an analytics layer, something clever sitting on top of infrastructure. That framing is already outdated,” he said. “The moment an AI system influences a physical process, even indirectly, it stops being an analytics tool, it becomes part of the control system. And once it becomes part of the control system, it inherits the responsibilities of safety engineering.”
He noted that the consequences of misconfiguration in cyber-physical environments differ from those in traditional IT estates, where the result is typically an outage or instability.
“In cyber-physical environments, misconfiguration interacts with physics. A badly tuned threshold in a predictive model, a configuration tweak that alters sensitivity to anomaly detection, a smoothing algorithm that unintentionally filters weak signals, or a quiet shift in telemetry scaling can all change how the system behaves,” he said. “Not catastrophically at first. Subtly. And in tightly coupled infrastructure, subtle is often how cascade begins.”
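To make that failure mode concrete, here is a minimal sketch of how widening a smoothing window can quietly push a weak anomaly spike below a fixed detection threshold. The window sizes, threshold, and telemetry values are illustrative assumptions, not drawn from any specific product.

```python
# Illustrative only: window sizes, threshold, and signal values are hypothetical,
# chosen to show how a "quiet" configuration tweak can hide a weak anomaly
# from a threshold-based detector.

def moving_average(signal, window):
    """Trailing moving average used as a simple smoothing step."""
    smoothed = []
    for i in range(len(signal)):
        start = max(0, i - window + 1)
        chunk = signal[start:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

def detect(signal, window, threshold):
    """Return indices where the smoothed signal exceeds the threshold."""
    return [i for i, v in enumerate(moving_average(signal, window)) if v > threshold]

# Steady telemetry with one weak anomaly spike at index 10.
telemetry = [1.0] * 10 + [2.0] + [1.0] * 10

print(detect(telemetry, window=2, threshold=1.4))  # [10, 11] -> spike detected
print(detect(telemetry, window=4, threshold=1.4))  # []       -> same spike smoothed away
```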
He added: “Organizations should require explicit articulation of worst-case behavioral scenarios for every AI-enabled operational component. If demand signals are misinterpreted, what happens? If telemetry shifts gradually, how does sensitivity change? If thresholds are misaligned, what boundary condition prevents runaway behavior? When teams cannot answer these questions clearly, governance maturity is incomplete.”
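As one illustration of the kind of boundary condition Gogia describes, the hypothetical guard below clamps an AI-suggested setpoint to engineered limits and a maximum per-cycle change, so even a badly tuned model cannot drive runaway behavior. The names and limit values are assumptions made for the sketch, not part of any real control system.

```python
# Hypothetical guard illustrating one answer to "what boundary condition
# prevents runaway behavior?": AI-suggested setpoints are clamped to
# engineered safety limits and a maximum per-step change before being applied.
# All names and numbers here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SafetyEnvelope:
    min_value: float   # hard lower bound set by safety engineering
    max_value: float   # hard upper bound set by safety engineering
    max_step: float    # largest allowed change per control cycle

def guarded_setpoint(current: float, ai_suggested: float, env: SafetyEnvelope) -> float:
    """Clamp an AI-suggested setpoint to the safety envelope.

    Even if the model's output drifts or is badly tuned, the applied value
    can never leave the engineered bounds or jump faster than max_step.
    """
    # Limit how fast the setpoint can move in a single cycle.
    step = max(-env.max_step, min(env.max_step, ai_suggested - current))
    candidate = current + step
    # Never leave the hard engineering limits.
    return max(env.min_value, min(env.max_value, candidate))

# Example: a misbehaving model suggests a huge jump; the guard bounds it.
env = SafetyEnvelope(min_value=0.0, max_value=80.0, max_step=2.0)
print(guarded_setpoint(current=50.0, ai_suggested=500.0, env=env))  # 52.0, not 500.0
```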