
Data and AI: when decision rights matter more than models

Capability has moved faster than decision making

Most organisations now have access to data platforms, analytics tooling, and increasingly capable AI. The technical barrier to entry has fallen quickly. 

What has not moved at the same pace is clarity about how decisions are made. Who is allowed to change a decision. When a human can override a model. What evidence is required to justify doing so. 

In consumer and supply environments, where conditions change quickly and tolerance for error is low, this gap becomes visible early. 


The pressure shows up where decisions matter most

A typical pressure moment is not an AI failure. It is a trading period where demand patterns shift, and the organisation needs to respond quickly. 

For example, a promotion performs differently than expected across regions or channels. Substitutions rise. Availability becomes uneven. Replenishment and pricing recommendations start to diverge from what operational teams are seeing on the ground. 

At that point the most important questions are often: 

  • who is allowed to override the recommendation 
  • what threshold triggers an override 
  • who carries the risk if the override makes things worse 
  • how that decision will be evidenced and reviewed later 

If those answers are unclear, decisions slow, confidence drops, and manual work increases. The model may be functioning correctly. The organisation is not. 
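
One way to make those answers concrete is to write them down as an explicit policy per decision type, rather than leaving them as tribal knowledge. The sketch below is a minimal illustration in Python; the roles, threshold, and field names are assumptions, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OverridePolicy:
    """Explicit answers, for one decision type, to the four questions above."""
    decision_type: str         # e.g. "replenishment_quantity"
    may_override: frozenset    # roles allowed to override the recommendation
    trigger_threshold: float   # divergence (as a fraction) that permits an override
    risk_owner: str            # role that carries the outcome if the override is wrong
    evidence_required: str     # what must be recorded to justify the override

# Hypothetical example: regional managers and supply planners may override
# replenishment recommendations that diverge more than 20% from observed demand.
REPLENISHMENT_POLICY = OverridePolicy(
    decision_type="replenishment_quantity",
    may_override=frozenset({"regional_manager", "supply_planner"}),
    trigger_threshold=0.20,
    risk_owner="head_of_supply",
    evidence_required="observed demand vs forecast, plus written rationale",
)

def override_permitted(policy: OverridePolicy, role: str, divergence: float) -> bool:
    """An override is in scope only when both the role and the threshold allow it."""
    return role in policy.may_override and divergence >= policy.trigger_threshold
```

The point is not the code itself. It is that each field forces someone to commit to an answer before the pressure moment, not during it.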


Trust in data is situational, not absolute 

Data quality is often discussed as a technical issue. In practice, trust in data is contextual. 

Data that is good enough during steady conditions can become unreliable during peaks, promotions, or supply disruption. Late updates, partial feeds, and manual corrections all affect confidence. When operational teams do not trust the underlying data, automated recommendations become harder to accept even when they are broadly right. 

This is why data and AI programmes often fall back to manual processes during the very periods when they were expected to help most. It is not irrational. It is a practical response to uncertainty. 
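
One practical pattern is to make that fallback explicit rather than ad hoc: gate automated recommendations on observable data-health signals, and downgrade them to advisory when the signals fail. A minimal sketch follows; the thresholds and function names are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone

# Illustrative thresholds; real values depend on the feed and the trading period.
MAX_FEED_AGE = timedelta(hours=2)
MIN_COMPLETENESS = 0.95  # fraction of expected records actually received

def recommendation_mode(last_update: datetime, completeness: float) -> str:
    """Return "automatic" only when the underlying feed is fresh and complete.

    Otherwise the recommendation is routed for human review rather than
    silently executed, making the fallback a designed behaviour instead
    of an informal workaround.
    """
    fresh = datetime.now(timezone.utc) - last_update <= MAX_FEED_AGE
    complete = completeness >= MIN_COMPLETENESS
    return "automatic" if fresh and complete else "advisory"
```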


Most AI programmes stall at the point where someone has to be accountable

Many AI initiatives reach a familiar point. The model works. The outputs are plausible. The pilot demonstrates potential. 

Then progress slows. 

What usually sits behind that slowdown is uncertainty at the decision boundary. The hardest questions are not technical: 

  • who is accountable for acting on the recommendation 
  • what happens when the model and local judgement disagree 
  • how overrides are recorded and reviewed 
  • how outcomes are evaluated after the fact 

This is where AI meets the operating model. Until these questions are resolved, AI remains advisory rather than operational. 
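
A minimal way to resolve the recording and review questions is to capture every accepted or overridden recommendation as a structured record that can be aggregated after the fact. The fields and summary below are illustrative assumptions, not a prescribed schema.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionRecord:
    decision_id: str
    recommendation: float   # what the model proposed
    action_taken: float     # what was actually executed
    overridden: bool
    authorised_by: str      # who carried the final decision
    rationale: str          # evidence attached at the time, not reconstructed later

def override_summary(records: list[DecisionRecord]) -> dict:
    """After-the-fact review: how often, and by whom, recommendations were overridden."""
    overrides = [r for r in records if r.overridden]
    return {
        "total_decisions": len(records),
        "override_rate": len(overrides) / len(records) if records else 0.0,
        "overrides_by_authoriser": Counter(r.authorised_by for r in overrides),
    }
```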


The proof problem becomes more important, not less

As we discussed in relation to auditability and traceability, many organisations do not have an information problem. They have a proof problem. 

AI increases the importance of proof because it increases decision volume and decision speed. When outcomes are questioned, the organisation needs to be able to show what informed the recommendation, whether it was accepted or overridden, and who authorised the final call. 

In regulated consumer and supply environments, it is not enough to say “the model suggested it”. The organisation needs to be able to evidence the decision path. 

Without that, AI use cases tend to remain confined to low-impact areas where scrutiny is limited. 
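
One generic pattern for evidencing a decision path is a hash-chained, append-only log: each entry commits to what informed the recommendation, whether it was accepted or overridden, who authorised the final call, and the entry before it, so later edits are detectable. The sketch below assumes simple JSON-serialisable entries and is an illustration, not a compliance recipe.

```python
import hashlib
import json

def append_decision(log: list, entry: dict) -> None:
    """Append an entry whose hash chains to the previous one.

    `entry` is assumed to carry: the inputs that informed the recommendation,
    the recommendation itself, accepted/overridden, and the authoriser.
    """
    prev_hash = log[-1]["hash"] if log else "genesis"
    body = {**entry, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify(log: list) -> bool:
    """Recompute the chain; any edited or deleted entry breaks verification."""
    prev = "genesis"
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != digest:
            return False
        prev = e["hash"]
    return True
```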


Where AI fails in practice 

When AI does not translate into outcomes, the failure is usually visible in a few recurring places: 

  • overrides happen informally, with no traceable record 
  • teams do not agree what “good” looks like, so value cannot be judged 
  • exceptions multiply and manual intervention becomes the operating model 
  • escalation routes are unclear when time is limited 

In other words, AI often exposes the same underlying issues we see in operating models and sourcing. Accountability fragments. Decision cadence slows. Coordination cost rises. 


Product models and AI can add ambiguity if not designed carefully

Product-oriented operating models are often used to bring business and technology closer together. In the context of AI, they can also introduce new ambiguity. 

Product teams may be accountable for outcomes, but dependent on shared data platforms and models. Platform teams may own the capability, but not the business decision. Operational teams carry consequences. 

If decision rights are unclear, AI increases coordination overhead rather than reducing it. More insight is generated, but responsibility becomes harder to pin down. This is not a failure of the model. It is a design choice that needs to be made explicit. 
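
Making that design choice explicit can be as simple as writing the mapping down and checking it: shared capability is fine, shared accountability is not. The teams and decision types below are hypothetical, for illustration only.

```python
# Hypothetical decision-rights map: platform teams may be consulted on every
# decision, but each business decision has exactly one accountable owner.
DECISION_RIGHTS = {
    "pricing_recommendation": {"accountable": "pricing_product_team",
                               "consulted": ["data_platform_team"]},
    "replenishment_quantity": {"accountable": "supply_product_team",
                               "consulted": ["data_platform_team", "store_ops"]},
    "promotion_forecast":     {"accountable": "trading_team",
                               "consulted": ["data_science_team"]},
}

def check_single_accountability(rights: dict) -> None:
    """Fail loudly if any decision lacks exactly one accountable owner."""
    for decision, entry in rights.items():
        owner = entry.get("accountable")
        if not isinstance(owner, str) or not owner:
            raise ValueError(f"{decision}: needs exactly one accountable owner")
```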


Why this matters before scaling AI 

Scaling AI without resolving decision rights, trust, and proof usually increases risk. 

Automated decisions move faster than governance. Errors propagate more quickly. Exceptions multiply. Manual intervention grows, often outside formal processes. The organisation becomes more dependent on informal workarounds at exactly the moment it is trying to industrialise. 

This is why many AI initiatives plateau. Not because the technology cannot scale, but because the organisation cannot stand behind decisions at scale. 
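
One guardrail that stops automated decisions outrunning governance is a simple circuit breaker: watch the recent override rate, and pause automatic execution when it spikes, on the assumption that operators repeatedly rejecting recommendations signals divergence from ground truth. The window and threshold below are illustrative.

```python
from collections import deque

class OverrideCircuitBreaker:
    """Pause automatic execution when operators keep rejecting recommendations."""

    def __init__(self, window: int = 200, max_override_rate: float = 0.25):
        self.recent = deque(maxlen=window)  # True = recommendation overridden
        self.max_override_rate = max_override_rate

    def record(self, overridden: bool) -> None:
        self.recent.append(overridden)

    def automation_allowed(self) -> bool:
        """Allow automation until the rolling override rate exceeds the limit."""
        if len(self.recent) < self.recent.maxlen:
            return True  # not enough signal yet; a permissive default is an assumption
        rate = sum(self.recent) / len(self.recent)
        return rate <= self.max_override_rate
```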


Where this leads next 

As AI moves into operations, it becomes part of resilience. 

When something goes wrong, the organisation needs to respond quickly, recover cleanly, and explain what happened with confidence. That capability depends on clear roles during incidents, strong recovery discipline, and a realistic understanding of third-party dependencies. 

This is why discussions about data and AI inevitably lead into resilience and cyber. Not because they are the same, but because they are tested under the same conditions: peaks, incidents, and scrutiny. 

Understanding that link matters before relying on AI in environments where pressure is constant and public. 

If you would like to speak to me regarding this insight, send your enquiry to contact@masonadvisory.com 
