Millennial AI
Operations

Your AI initiative needs fewer models and more process design

Neha Mazumdar · January 13, 2026 · 13 min read

TL;DR

  • Automating a broken process just produces faster failures.
  • Process mapping before AI implementation typically reveals 30-40% of steps can be cut entirely.
  • AI deployments that work change the workflow, not just the tooling.
  • Cross-functional ownership outperforms siloed AI projects.

The model obsession

As McKinsey's State of AI survey highlights, many mid-market companies assume AI success depends on model selection. Teams spend weeks evaluating whether GPT-4 or Claude or an open-source alternative is the right choice. They benchmark accuracy to the second decimal point. They run pilots against three providers.

Meanwhile, the process the model will serve remains exactly as it was designed in 2019. Data flows through the same convoluted path. Approvals pass through the same unnecessary checkpoints. The same team manually handles exceptions that could be categorized and routed automatically.

Model selection matters. But it matters far less than most organizations assume, and far less than process design. A mediocre model in a well-designed process will outperform an excellent model in a broken one.

We have run a rough comparison across our client work. Teams that spent 70% of project time on process redesign and 30% on model selection consistently saw better production outcomes than teams that inverted those ratios. The difference was not marginal. Process-first teams hit their ROI targets within 90 days of deployment at roughly twice the rate of model-first teams. The reason is straightforward: a clean process with clear inputs, defined decision points, and measurable outputs gives any model a better chance of performing well. As HBR's analysis of why data science projects fail confirms, a messy process with ambiguous inputs and undefined success criteria makes even the best model look unreliable.

What process mapping turns up

Before any AI implementation, we run a process-mapping exercise with clients. The results follow a pattern. Roughly a third of steps in a given workflow exist for historical reasons that no longer apply. They were added to compensate for a limitation in a previous system, satisfy a compliance requirement that has since changed, or accommodate a team member who left three years ago.

Removing these steps is basic operational improvement and a prerequisite for effective AI deployment. It reduces the surface area the AI system needs to handle. Fewer steps means fewer integration points, fewer failure modes, and a simpler system to maintain.

The exercise also reveals decision points that genuinely benefit from AI: moments where a human synthesizes multiple data sources, applies judgment under uncertainty, or performs pattern recognition across large datasets. These are the insertion points where AI adds value, as opposed to data-entry steps where simple automation is enough.

Signals that a process is ready for automation

Not every manual process is ready for AI. Some are too unstructured. Others depend on relationships and context that resist codification. Good candidates share specific, observable signals.

High volume with consistent structure is the clearest. If a team processes 50+ units per day (invoices, tickets, applications, orders) and each unit follows roughly the same shape, the process is a candidate. Volume matters because the AI system encounters enough examples to perform reliably. Consistent structure matters because the inputs are predictable enough to automate against.

Documented decision rules are the second signal. If the people doing the work can articulate their rules ("if the order value is over $10,000 and the customer has been with us less than 6 months, it goes to manual review"), those rules can be encoded. If the logic is purely intuitive ("I just know when something looks off"), the process is not ready. The intuitive judgment may be valuable, but it needs to be decomposed into explicit criteria before AI can assist.
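The quoted rule is explicit enough to encode directly. A minimal sketch, using the thresholds from the example above (the `Order` type and field names are illustrative assumptions, and a real system would load thresholds from configuration rather than hard-code them):

```python
from dataclasses import dataclass

@dataclass
class Order:
    value: float                  # order value in dollars
    customer_tenure_months: int   # how long the customer has been with us

def route(order: Order) -> str:
    """Route an order using the explicit rule quoted in the text."""
    # "Over $10,000 and customer with us less than 6 months" -> manual review
    if order.value > 10_000 and order.customer_tenure_months < 6:
        return "manual_review"
    return "auto_process"

# A $12,500 order from a 3-month-old customer goes to manual review;
# the same order from a long-standing customer is processed automatically.
print(route(Order(value=12_500, customer_tenure_months=3)))   # manual_review
print(route(Order(value=12_500, customer_tenure_months=24)))  # auto_process
```

The point is not the code itself but the test it represents: if the team's logic can be written down this plainly, AI can assist; if it cannot, the decomposition work comes first.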

Clear input-output definitions are the third. A process where the input is a structured form and the output is a categorized record is far more automatable than one where the input is "whatever the client sends us" and the output is "a recommendation based on experience." The more ambiguous the inputs and outputs, the more human judgment the process requires and the harder it is to automate.

Frequent handoffs between people or systems are a fourth signal, but a different kind. They signal not that the process is ready for AI, but that it is ready for redesign. Each handoff is a point where information gets lost, delayed, or misinterpreted. Reducing handoffs through process redesign often delivers value before any AI is involved. AI can then be applied to the streamlined process with better results.

Signal           Strong candidate                      Weak candidate
Volume           50+ units/day, consistent structure   Low volume, irregular
Decision rules   Documented, explicit criteria         Purely intuitive
Input/output     Structured, defined                   Ambiguous, variable
Handoffs         Many (redesign opportunity)           Few
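The three readiness signals can be turned into a rough screening check. This is a sketch only: the point scale and the "all three signals present" cutoff are illustrative assumptions, not a validated rubric, and the handoff signal is deliberately excluded because it points to redesign rather than readiness.

```python
def readiness_score(units_per_day: int,
                    rules_documented: bool,
                    io_defined: bool) -> tuple[int, str]:
    """Screen a process against the three readiness signals in the table.

    Returns (score out of 3, verdict). The scoring and cutoff are
    illustrative, not a validated rubric.
    """
    score = 0
    score += 1 if units_per_day >= 50 else 0  # volume with consistent structure
    score += 1 if rules_documented else 0     # explicit decision rules
    score += 1 if io_defined else 0           # defined inputs and outputs
    verdict = "strong candidate" if score == 3 else "redesign first"
    return score, verdict

# Invoice processing: high volume, documented rules, structured inputs.
print(readiness_score(80, True, True))    # (3, 'strong candidate')
# Ad-hoc client requests: low volume, intuitive judgment.
print(readiness_score(20, False, True))   # (1, 'redesign first')
```

A checklist like this is most useful during the process-mapping exercise itself, as a way to force the conversation about which signals are actually present.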

Redesign the workflow, do not bolt AI on top

Bolt-on AI is a digital assistant that reads emails and summarizes them. Integrated AI is a redesigned communication workflow where the relevant information surfaces proactively and the emails become unnecessary.

Bolt-on AI preserves the existing process and adds a technology layer on top. It is faster to deploy and easier to justify politically, because it requires zero behavior change. It is also where most AI projects plateau, because the incremental value of a smarter tool within a legacy process is inherently limited.

Integrated AI starts with a different question: if we were designing this workflow from scratch, knowing what AI can do, what would it look like? The answer is usually a different sequence of steps with different handoff points, data requirements, and roles.

Change management

Process redesign fails more often from people problems than technology problems. A well-designed AI-enhanced workflow that the team resists delivers less value than a mediocre workflow the team embraces.

The resistance is usually rational. People worry about job security, about being held accountable for AI errors they did not make, about losing skills and judgment they have built over years. Dismissing these concerns as "resistance to change" is a management failure. Addressing them directly is the job.

Transparency about intent is the starting point. If the AI initiative is meant to reduce headcount, say so. If it is meant to shift work from routine tasks to higher-value activities, say that and be specific about what those activities are. Vague reassurances ("your jobs are safe") followed by layoffs six months later destroy trust permanently. Honest communication, even when uncomfortable, preserves the organization's ability to make future changes.

Involve the affected team in design. The people who do the work every day understand its nuances better than any process map captures. They know which edge cases are common, which workarounds are load-bearing, which parts of the process are genuinely frustrating versus satisfying. A team that helps design the new workflow feels ownership. A team that has a new workflow imposed on them feels disposable.

Training matters, but not the way most organizations do it. A two-hour session three days before launch is almost worthless. What works: hands-on practice with the new workflow during a transition period where both old and new processes run in parallel. The team uses the AI-assisted process but can fall back when needed. Over two to four weeks, the new process proves itself (or reveals problems that need fixing). The transition period costs more than a hard cutover, but adoption rate is dramatically higher.

Expect a productivity dip in weeks one through three. Any process change, even an improvement, temporarily slows things down as people learn new patterns. Leadership needs to expect this and not panic. Pulling the plug on an AI workflow because week-two metrics are worse than pre-deployment metrics is a common and expensive mistake. Measure at week eight, not week two.

Cross-functional ownership beats silos

AI projects with cross-functional ownership consistently outperform those that live in one department. Most useful AI applications touch multiple parts of the organization, so this tracks.

A customer support AI system needs input from product (current issues), engineering (build and maintain), support (validate responses and handle escalations), and leadership (define what should be automated and what stays manual). Every group involved should have a seat in design. Otherwise the system develops blind spots that show up in production.

We recommend a lightweight steering group for each significant AI initiative: a technical lead, a process owner from the primary department, someone from the most affected adjacent team, and an executive sponsor who can resolve resource conflicts. This group meets biweekly for 30 minutes. Short meeting, narrow scope. They keep the system connected to operational reality.

The process-first playbook

MIT Sloan's guide on choosing your first AI project reinforces this: for companies considering their first or second AI deployment, we recommend a sequence that feels backwards to technology-oriented teams. Start with the process, not the technology.

1. Map the current process
2. Eliminate unnecessary steps
3. Identify AI insertion points
4. Select and implement the model
5. Measure at the process level

This sequence takes longer to show visible results, but the results hold. Companies that skip straight to tool selection often achieve impressive pilot metrics followed by disappointing production outcomes. The gap between pilot and production is almost always a process gap, and closing it before you pick the model saves months.

Neha Mazumdar

Partner, Strategy & Digital Transformation

Three years at McKinsey taught her to diagnose a business problem in a week. She wanted to go further and make sure the fix got built. Runs client engagements and holds every project to the bar of a shipped product.
