The model obsession
There is a persistent belief in mid-market companies that AI success depends on model selection. Teams spend weeks evaluating whether GPT-4 or Claude or an open-source alternative is the right choice for their use case. They benchmark accuracy to the second decimal place. They run pilots against three different providers.
Meanwhile, the process that the model will serve remains exactly as it was designed in 2019. Data flows through the same convoluted path. Approvals pass through the same unnecessary checkpoints. The same team manually handles exceptions that could be categorized and routed automatically.
Model selection matters. But it matters far less than most organizations assume, and it matters far less than process design. A mediocre model deployed in a well-designed process will outperform an excellent model deployed in a broken one.
What process mapping reveals
Before any AI implementation, we conduct a process-mapping exercise with clients. The results are consistently surprising. On average, 30-40% of the steps in a given workflow exist for historical reasons that no longer apply. They were added to compensate for a limitation in a previous system, to satisfy a compliance requirement that has since changed, or to accommodate a team member who left three years ago.
Removing these steps is not an AI initiative. It is basic operational improvement. But it is a prerequisite for effective AI deployment because it reduces the surface area that the AI system needs to handle. Fewer steps means fewer integration points, fewer failure modes, and a simpler system to maintain.
The exercise also reveals the decision points that genuinely benefit from AI assistance -- the moments where a human is synthesizing multiple data sources, applying judgment under uncertainty, or performing pattern recognition across large datasets. These are the insertion points where AI adds value, as opposed to the data-entry steps where simple automation (no AI required) is sufficient.
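The "categorize and route" pattern mentioned above is usually plain rules, not a model. A minimal sketch of what that looks like in practice; the category names, thresholds, and field names are hypothetical stand-ins for whatever the process-mapping exercise actually surfaces:

```python
# A sketch of the "no AI required" routing step: deterministic rules
# handle the predictable exceptions so the AI system only sees the
# decision points that need judgment. All rules here are illustrative.

def route_exception(record: dict) -> str:
    """Route an exception record to a handling queue using plain rules."""
    if record.get("amount", 0) > 10_000:
        return "manual-review"      # high value: keep a human in the loop
    if record.get("missing_fields"):
        return "data-correction"    # incomplete data: send back to the source
    if record.get("customer_tier") == "enterprise":
        return "priority-queue"
    return "auto-approve"           # everything else clears automatically

print(route_exception({"amount": 25_000}))              # manual-review
print(route_exception({"missing_fields": ["tax_id"]}))  # data-correction
```

The point of writing the rules down explicitly is that each branch is auditable; anything the rules cannot classify is, by definition, a candidate insertion point for AI.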
Redesigning workflows for AI, not bolting AI onto workflows
The difference between bolt-on AI and integrated AI is the difference between a digital assistant that reads your emails and summarizes them, and a redesigned communication workflow where the relevant information surfaces proactively without anyone reading the emails in the first place.
Bolt-on AI preserves the existing process and adds a technology layer on top. It is faster to deploy and easier to justify politically, because it does not require anyone to change their behavior. It is also where most AI projects stall, because the incremental value of a smarter tool within a dumb process is inherently limited.
Integrated AI starts with the question: if we were designing this workflow from scratch, knowing what AI can do, what would it look like? The answer is almost never the current process plus a chatbot. It is usually a fundamentally different sequence of steps with different handoff points, different data requirements, and different roles for the humans involved.
Cross-functional ownership beats siloed projects
AI projects that live entirely within one department -- IT builds it, or marketing owns it, or operations runs it -- consistently underperform those with cross-functional ownership. This is not surprising: most useful AI applications touch multiple parts of the organization.
A customer support AI system needs input from product (to understand current issues), engineering (to build and maintain it), support (to validate responses and handle escalations), and leadership (to define what should and should not be automated). When any of those groups is left out of the design process, the system develops blind spots that show up in production.
We recommend a lightweight steering group for each significant AI initiative: a technical lead, a process owner from the primary department, someone from the most affected adjacent team, and an executive sponsor who can resolve resource conflicts. This group meets every two weeks for 30 minutes. The meeting is short because the scope is narrow -- they are not redesigning the system, just making sure it stays connected to operational reality.
The process-first playbook
For companies considering their first or second AI deployment, we recommend a sequence that feels backwards to technology-oriented teams. Map the current process in detail, including the informal workarounds nobody has documented. Eliminate unnecessary steps through basic process improvement. Identify the decision points where AI can genuinely improve outcomes. Then select and implement the AI solution for those specific points. Measure impact at the process level, not the model level.
This sequence takes longer to show visible results, but the results hold. Companies that skip straight to tool selection often achieve impressive pilot metrics followed by disappointing production outcomes. That gap between pilot and production is almost always a process gap, not a technology one.
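Measuring at the process level can be as simple as comparing end-to-end numbers before and after deployment. A sketch, with invented metric names and illustrative figures; real deployments would pick the cycle-time and rework definitions that match their own workflow:

```python
# Process-level measurement: track end-to-end outcomes (cycle time,
# first-pass yield), not model accuracy. All numbers are illustrative.

def process_metrics(cases: list[dict]) -> dict:
    """Average cycle time and first-pass yield across completed cases."""
    n = len(cases)
    avg_hours = sum(c["cycle_hours"] for c in cases) / n
    first_pass = sum(1 for c in cases if not c["rework"]) / n
    return {"avg_cycle_hours": round(avg_hours, 1),
            "first_pass_yield": round(first_pass, 2)}

before = [{"cycle_hours": 48, "rework": True},
          {"cycle_hours": 36, "rework": False}]
after  = [{"cycle_hours": 12, "rework": False},
          {"cycle_hours": 20, "rework": False}]

print(process_metrics(before))  # {'avg_cycle_hours': 42.0, 'first_pass_yield': 0.5}
print(process_metrics(after))   # {'avg_cycle_hours': 16.0, 'first_pass_yield': 1.0}
```

A model can score well on benchmarks while these numbers barely move; that divergence is the pilot-to-production gap described above.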
Millennial AI
AI Consultancy
Millennial AI is a team of five partners covering AI strategy, engineering, growth marketing, operations, and finance. We write about the intersection of AI capability and operational reality for mid-market companies.