AI Won’t Fix What Your System Keeps Producing

I was recently listening to The Curiosity Shop with Brené Brown and Adam Grant. I’m a fan of both, and what caught my attention was their discussion of systems thinking and the iceberg model — the idea that what we see on the surface is often only a small part of what is really driving outcomes.

It immediately took me back to my early days in quality and transformation work, including the privilege of working with Dr. W. Edwards Deming before he passed away. Long before AI, before digital transformation, and before today’s language of “operating models,” Deming and the TQM movement taught us a lesson leaders are still relearning:

Visible problems are rarely the whole problem.

In Lean Six Sigma, we later described this through the “cost of poor quality” iceberg. Management often sees the visible issues: missed targets, customer complaints, long cycle times, audit findings, defects, and delays. But below the waterline sits the hidden factory — rework, handoffs, workarounds, unclear ownership, poor data quality, inspection loops, and decisions made too late or in the wrong place.

That hidden work is often where the real cost lives.

And today, AI is making this old lesson newly urgent.

We are seeing a modern version of the productivity paradox. AI is everywhere — in pilots, tools, budgets, and boardroom conversations — but measurable enterprise impact remains uneven. McKinsey’s 2025 State of AI report found that only about 6% of respondents qualified as AI high performers, defined as organizations attributing at least 5% EBIT impact to AI use and reporting significant value from AI. These high performers are more likely to redesign workflows, scale faster, and use AI for transformation rather than simple efficiency.

That should tell leaders something important: the constraint is not just the technology. It is the system the technology is being placed into.

The Iceberg Is Still There

Most organizations are not short on visible problems.

They see slow processes, rising costs, frustrated customers, employees overwhelmed by manual work, too many handoffs, poor data quality, and technology that is underused or misused.

Naturally, leaders ask: Can AI help us solve this?

And often, the answer is yes.

But that is not the first question I would ask.

The better question is:

What system are we asking AI to operate inside?

Because if the system is fragmented, unclear, over-controlled, poorly measured, or full of rework, AI may not solve the problem. It may simply make the broken system move faster.

From Events to Patterns to Structures

That is where systems thinking becomes so valuable.

The iceberg model gives leaders a way to slow down and look below the surface of a problem. What we notice first is usually the event: the missed deadline, the customer complaint, the failed rollout, the employee resistance, the low adoption rate. Those events matter, but they are usually symptoms, not explanations.

Take a common example in today’s workplace: leaders roll out a new AI tool and then conclude, a few months later, that “employees are not using it.” That is the event. It is visible, measurable, and easy to react to. But by itself, it does not tell us very much.

The next question is whether that event is part of a broader pattern. Did usage spike right after launch and then fade? Is adoption limited to a small group of early adopters while most employees continue working the old way? Are the same functions or teams repeatedly struggling to integrate AI into their routines? Once we start looking for repetition over time, the issue becomes less about one disappointing outcome and more about a pattern in how the organization absorbs change.

Below that pattern sit the structures that make it likely to continue. Perhaps the workflow was never redesigned. Perhaps employees were trained on the tool, but not shown how to use it in the context of their actual work. Perhaps the data behind the tool is weak, the governance unclear, or the incentives still reward the old way of working. In some organizations, leaders encourage experimentation while managers, often unintentionally, punish the time it takes to learn.

And beneath those structures are the mental models — the assumptions people are carrying, often without saying them out loud. “If we give people access, they will use it.” “Technology adoption is mostly a training issue.” “AI is either a magic shortcut or a threat to quality.” “Real productivity means doing the work myself.”

That is the systems-thinking move. Instead of reacting only to the surface event, leaders ask what pattern this is part of, what structures are sustaining it, and what assumptions are sitting underneath. Only then can they begin to redesign the conditions that produce the behavior.

The TQM and Lean Lesson

TQM, Lean, Six Sigma, and Business Process Management were all built on a similar idea:

To improve performance, you have to understand the system.

You cannot inspect quality into a product at the end. You cannot Lean a process by automating every step. You cannot reduce variation if you do not understand the causes. You cannot improve flow if no one owns the end-to-end process.

In one Lean Six Sigma training example I used years ago, a process appeared to have high customer satisfaction from the outside. But when we looked deeper at rolled-throughput yield, true end-to-end performance was dramatically lower because rework was happening at nearly every step. On the surface, the customer saw a completed outcome. Underneath, the organization was paying for complexity, delay, and waste.
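The arithmetic behind that gap is worth seeing once. Rolled-throughput yield is the product of the first-pass yields of every step, so a chain of steps that each look healthy can compound into a much weaker end-to-end result. Here is a minimal sketch, with hypothetical step yields rather than the figures from that training example:

```python
# Rolled-throughput yield (RTY): the probability that a unit passes
# every step right the first time, with no rework or scrap.
# The step yields below are hypothetical, chosen only for illustration.

first_pass_yields = [0.95, 0.92, 0.90, 0.93, 0.91]  # five process steps

rty = 1.0
for y in first_pass_yields:
    rty *= y

print(f"Average step yield:      {sum(first_pass_yields) / len(first_pass_yields):.1%}")
print(f"Rolled-throughput yield: {rty:.1%}")  # ~66.6%
```

Every step clears 90%, yet only about two-thirds of units make it through right the first time. The gap is invisible to the customer, because rework quietly fills it.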

That is the hidden factory.

And every organization has one.

The same idea now applies to AI. A team may save time on a specific task, but if the workflow is not redesigned, the capacity is not redeployed, and the measures do not change, the benefit may never reach the customer or the P&L. The organization may feel more “AI-enabled,” while the system continues to produce the same outcomes.

That is productivity leakage.

The value exists at the task level, but dissipates before it becomes enterprise value.

AI Can Accelerate the Hidden Factory

This is where AI can be both powerful and dangerous.

If you apply AI to a well-designed process, it can improve speed, quality, decision support, and customer experience.

But if you apply AI to a broken system, it may accelerate the very dysfunction you are trying to remove.

A chatbot can answer customer questions faster — but if it is trained on unclear policies or lacks the right escalation path, it can create new risk. Air Canada learned this when its chatbot gave a customer incorrect information about bereavement fares, and a Canadian tribunal held the airline responsible for the misinformation. The visible issue was a chatbot error. The deeper issue was the system around the chatbot: policy clarity, validation, accountability, and customer promise ownership.

Legal work offers another example. AI can accelerate research and drafting, but if the workflow does not preserve verification and professional judgment, it can produce poor-quality outputs at scale. Reuters reported that two New York lawyers were sanctioned after submitting a brief that included fictitious case citations generated by ChatGPT. The problem was not simply that AI was used. The problem was that the work system failed to require adequate review before output became official.

AI can also create a new form of hidden work: the verification tax.

Workday’s 2026 research found that nearly 40% of AI time savings are lost to rework, including correcting errors, rewriting content, and verifying outputs from one-size-fits-all AI tools.

That should sound very familiar to anyone who has worked in TQM, Lean, or Six Sigma.

If every AI-generated output requires extensive checking, correction, and rework, the organization may simply be replacing one kind of labor with another. The process looks faster at the front end, but quality control has shifted downstream.
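A back-of-the-envelope model makes the leakage easy to see. The task times below are hypothetical, chosen so that the share of savings lost to rework lands near the roughly 40% figure Workday reports:

```python
# Net time saved on a task once the "verification tax" is counted:
# the checking, correcting, and rewriting that AI output still needs.
# All task times are hypothetical, for illustration only.

manual_minutes = 60        # doing the task entirely by hand
ai_draft_minutes = 20      # producing a first draft with AI
verify_minutes = 16        # reviewing and correcting the draft

gross_saving = manual_minutes - ai_draft_minutes                   # 40 min
net_saving = manual_minutes - (ai_draft_minutes + verify_minutes)  # 24 min
leakage = (gross_saving - net_saving) / gross_saving               # 0.40

print(f"Gross saving: {gross_saving} min | Net saving: {net_saving} min")
print(f"Share of savings lost to verification and rework: {leakage:.0%}")
```

The front end looks 40 minutes faster, but 16 of those minutes were simply moved downstream.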

That is not transformation.

That is a new hidden factory.

AI Changes the System, Not Just the Speed

AI does not just change process speed. It can change cognitive behavior inside the system.

Some users overtrust the tool and skip verification. Others distrust it entirely and avoid using it. Some feel empowered. Others feel exposed or threatened. Some use approved tools. Others find their own unsanctioned shortcuts.

This is why AI adoption is not simply a technical implementation.

It is a redesign of the human system around the work.

If people overtrust AI, quality may suffer. If people distrust it entirely, adoption stalls. If people use it outside approved workflows, risk increases. If people are not trained in context, productivity gains may never materialize.

The leadership challenge is to design the system so AI supports judgment rather than replacing it.

Adoption Depends on Fit

Access does not equal usage. And usage does not automatically equal value.

Gallup’s workplace research has shown that AI adoption is rising, but frequent use and meaningful workflow change remain uneven. The same research points to the importance of manager support and deliberate attention to how AI fits into existing workflows.

That aligns with what I am seeing in the field.

When AI fits the work, people use it. When it feels bolted on, they work around it. When it improves the flow of work, adoption grows. When it adds uncertainty, people hesitate.

This is why organizations should not begin with the question, “What can we automate?”

They should begin with better questions:

  • What work creates value?

  • Where is the friction?

  • Where is judgment required?

  • Where is rework hiding?

  • Where are decisions delayed?

  • Where could AI help people do the work better and faster?

That is a very different conversation.

Lean First. AI Next.

This is where my own philosophy remains simple:

Fix the work before you automate it.

That does not mean waiting until everything is perfect. No process ever is.

But it does mean taking the time to understand the system before adding speed to it.

The goal is not to slow AI down. The goal is to make sure AI accelerates the right work.

Before deploying AI, leaders should ask whether they are solving the real problem or only the visible symptom. They should understand the end-to-end workflow, identify hidden rework, clarify roles and decision rights, validate whether the data can be trusted, and determine whether employees understand how AI supports the work they actually do.

They should also build feedback loops early. Not after deployment. Not after adoption stalls. Early enough to know whether the system is truly improving.

If those questions are not answered, AI may create the illusion of progress while the underlying system remains unchanged.

The New Version of an Old Lesson

Deming taught that quality is created by the system, not by slogans or heroic effort.

Lean taught us to remove waste and improve flow.

Six Sigma taught us to reduce variation and manage with data.

BPM taught us to see work end-to-end.

AI does not replace those disciplines.

It raises the stakes.

Because now we have tools that can move faster than our organizations are prepared to learn.

That is both the opportunity and the risk.

The leaders who get the most from AI will not be the ones who chase every tool or automate every task. They will be the ones who understand the system underneath the work — the patterns, structures, incentives, data, and assumptions that produce today’s outcomes.

AI can be a powerful accelerator.

But it will accelerate what your system is already designed to produce.

So before asking what AI can do, leaders should ask a more important question:

What is our system producing — and why?

Next: Beyond the Hype: Why AI Needs an Operating Model – The Case for an AI Center of Excellence