AI strategy for mid-market companies: the playbook nobody wrote for you
By Abhinav Palaparthy, Partner, AI & Data at Millennial AI. MBA, IIM Indore; B.E., BITS Pilani. Previously at McKinsey and ExxonMobil.
Published January 20, 2026. Category: AI Strategy.
Summary: 91% of mid-market companies are using AI tools, and most of them have nothing to show for it because they followed enterprise playbooks designed for companies with 10x their resources. The highest-ROI AI projects are boring ones — invoice matching, support triage, data cleanup — where the math is obvious and the team actually wants the help. Your first AI idea is almost always wrong; the second one, shaped by what you learn during diagnosis, typically delivers 4x the value. A good AI strategy for a mid-market company fits on one page and produces a measurable result within 90 days.
Your company is stuck in the AI middle
You run a company doing somewhere between $20M and $500M in revenue. You have a couple hundred employees, maybe a thousand. You sell real products to real customers, you have actual operations with actual complexity, and you have been hearing about AI for three years straight. So you did something about it. You bought a few licenses, ran a pilot or two, maybe hired a consultant who produced a nice deck. And now you are sitting in a Monday morning leadership meeting where someone asks, "What exactly did we get from that AI investment?" and the room goes quiet.

You are far from alone. According to the [RSM 2025 Middle Market AI Survey](https://rsmus.com/insights/industries/middle-market/ai-survey-2025.html), 91% of mid-market firms are already using generative AI in some capacity. That number sounds impressive until you read the next line: 53% of those same firms say they feel only "somewhat prepared" to use AI effectively. Nearly four in ten lack the in-house expertise to do anything meaningful with the tools they have already purchased. And 92% reported hitting significant challenges during rollout. So the adoption is real. The results are mostly absent.

This is what I call the AI middle. You are past the point of ignoring AI. Your board has asked about it. Your competitors mention it in their earnings calls. Your ops team is quietly using ChatGPT to draft emails and hoping nobody notices. But you are nowhere near the point where AI is changing how your business actually runs. You are stuck between awareness and impact, and every week you stay stuck, you are spending money and executive attention on something that has not paid you back.

The frustrating part is that the resources available to help you were written for someone else entirely. Go search for "AI strategy" and you will find frameworks designed for companies with $10M AI budgets, 200-person IT departments, and a Chief AI Officer who reports to the CEO. You will find Gartner quadrants and Forrester waves that rank software costing more than your entire technology budget. You will find case studies from JP Morgan and Walmart and Google, companies whose annual AI spend exceeds your total revenue.

Mid-market companies represent roughly one-third of private-sector GDP, according to the [World Economic Forum](https://www.weforum.org/stories/2026/01/ai-for-smes-small-medium-enterprises/). You are the economic backbone. But the AI industry treats you as an afterthought, a scaled-down version of the enterprise, as if your problems are just smaller versions of Fortune 500 problems. They are different problems. Your constraints are different, your advantages are different, and the playbook that works for you looks nothing like what gets published after a $50B conglomerate's offsite.

I have spent the last four years working with mid-market companies on AI strategy through ai4builders and now at Millennial AI. I have told dozens of leadership teams that their first AI idea was wrong. Most of them thanked me for it later because the second idea, the one we found together during diagnosis, ended up delivering real results. This post is the playbook I wish existed when I started. No jargon, no vendor pitches, no frameworks that require a team of 30 to execute. Just the strategy that actually fits your budget, your team, and your timeline.
Why enterprise AI playbooks fail at your scale
A $15B financial services company hires a Chief AI Officer, builds a 40-person AI Center of Excellence, spends $8M on data infrastructure, and 18 months later launches a fraud detection model that saves them $200M per year. Great story. Terrible template for a $120M manufacturing company with a five-person IT team and a data warehouse that is actually just a shared Google Drive folder. But that is exactly the template mid-market companies keep trying to follow, because it is the only one anyone has written down. So let me walk through specifically what breaks.

The org chart breaks first. Enterprise playbooks tell you to hire a Chief AI Officer or a Head of AI. For a 200-person company, this means paying $250K-$400K for a senior hire who will spend their first six months building a team, their next six months building infrastructure, and their first year producing exactly zero deployed AI systems. I have watched this happen three times in the last two years. In each case, the company would have been better off spending $80K on a focused engagement that put a working model into production within two months.

The budget math breaks next. [McKinsey's 2025 State of AI report](https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai) found that 88% of companies are using AI somewhere, but two-thirds of them are stuck in pilot mode, unable to move from proof of concept to production. Only about 5.5% qualify as high performers who see real EBIT impact. Over 80% reported no tangible effect on their bottom line. For enterprise companies with deep pockets, a perpetual pilot is an annoyance. For a mid-market company that allocated $150K to an AI initiative, a perpetual pilot is a dead investment that makes the CFO skeptical of every future AI proposal. Mid-market AI budgets typically range from $50K to $500K. That is enough to do something real, but only if you skip the infrastructure buildout phase that enterprises treat as step one.

The timeline breaks too. Enterprise AI projects run 6 to 12 months as a baseline, often longer. Mid-market companies cannot afford to wait a year for results. Your board reviews quarterly. Your competitors are moving. Your team's enthusiasm for "the AI project" has a half-life of about 90 days. If you have not shown a measurable result by then, you have lost organizational momentum, and momentum is the hardest thing to rebuild.

The talent model breaks completely. Enterprise playbooks assume you have ML engineers, data engineers, a DevOps team familiar with model deployment, and analysts who can evaluate model performance. You have a senior developer named Marcus who watched some YouTube tutorials on Python and a BI analyst who is very good at Excel. This is normal. Marcus and your BI analyst are perfectly capable of supporting an AI deployment. But the approach has to match the team you actually have, which means simpler architectures, managed services, and a partner who writes production code instead of handing you a strategy deck and wishing you luck.

The data story breaks last. Enterprise companies have data lakes, data governance policies, and teams whose entire job is data quality. You have a CRM that sales reps update when they feel like it, an ERP that was implemented in 2019 and never fully configured, and a collection of spreadsheets that contain critical business logic known only to your controller named Diane. This is standard for mid-market. And the right AI strategy starts with this reality instead of pretending you can transform your data estate before doing anything useful.

The pattern I see over and over: a mid-market company reads about what Salesforce or Microsoft is doing with AI, tries to replicate a miniature version of it, hits every one of these walls, and concludes that "we are not ready for AI." That conclusion is wrong. You are ready. You just need a different approach entirely.
Finding AI opportunities worth your budget
Every mid-market CEO I talk to has an AI idea. Usually it is a chatbot. Sometimes it is a recommendation engine. Occasionally it is something truly ambitious, like rebuilding their entire pricing model with machine learning. And almost every time, their first idea is the wrong place to start. That sounds harsh, so let me explain what I mean. The first idea is usually wrong because it was generated by pattern-matching against what they saw at a conference or read in a case study, rather than by looking at where their own operations are bleeding time and money. A chatbot is exciting. Invoice matching is boring. But invoice matching might be costing you $180K per year in labor and errors, while a chatbot might save your customer support team 20 minutes a day. The boring project has a 6-month payback. The exciting project has a PowerPoint slide.

I use a simple scoring framework when I evaluate AI opportunities for mid-market clients. Four variables, multiplied together: hours per week spent on the manual version of the task, times the error rate of the current process, times the cost of each error, times the team's willingness to adopt a new tool. That last variable is the one most people skip, and it matters enormously. An AI system that the team resists using will never deliver its theoretical ROI. An AI system that the team is begging for will overperform projections because people will find new ways to use it that you did not anticipate. A rough worked version of this scoring, in code, appears at the end of this section.

Let me run through three real examples from our diagnostic work.

Example one: accounts payable invoice matching at a $65M distribution company. Their AP team spent 25 hours per week manually matching invoices to purchase orders and delivery receipts. The error rate was around 3%, which sounds low until you calculate that each error triggered an average of $200 in downstream costs from payment delays, vendor disputes, and rework. That works out to roughly $800 per week in error costs, plus 25 hours of labor at $35 per hour loaded. The AP team was frustrated and vocal about wanting help. Score: high on every dimension. We deployed an invoice matching model in six weeks. It handles 82% of matches automatically with a 0.4% error rate. The AP team now spends those 25 hours on vendor negotiations and early payment discounts, which generated $140K in savings in the first year on top of the direct labor savings.

Example two: customer support triage at a $90M SaaS company. Their support team received about 200 tickets per day across email, chat, and their portal. A senior agent spent roughly 15 hours per week just reading tickets and routing them to the right specialist team. Misrouted tickets added an average of 4 hours to resolution time, and they misrouted about 12% of tickets. The support team was overwhelmed and willing to try anything that reduced the noise. Score: high across the board. We built a classification model that reads incoming tickets, identifies the category and urgency, and routes them with 94% accuracy. Resolution time dropped 31% in the first quarter. Customer satisfaction scores went up because customers stopped getting bounced between teams.

Example three: quarterly financial reporting at a $45M professional services firm. The CFO wanted to use AI to automate portions of their quarterly reporting process. The finance team spent about 40 hours per quarter compiling data from six systems into the reporting templates. So the time commitment was real. But the error tolerance was near zero since these reports go to investors and lenders. And the finance team was deeply skeptical. They had been burned by a software migration two years ago and did not trust automated outputs. When I talked to the controller, she said, "If this thing puts a wrong number in front of our lender, I am the one who gets the phone call at 7am." Score: low, despite the high time investment. We recommended they start somewhere else, build organizational trust in AI through a lower-stakes win, and revisit financial reporting in six months when the team had seen AI work reliably on a different process.

The pattern across these examples is consistent. The best AI opportunities for mid-market companies share three traits: the time cost is measurable and recurring on a weekly basis, the error cost is real and quantifiable in actual dollars, and the people doing the work today genuinely want the help.

<table><thead><tr><th>Workflow</th><th>Hours/Week</th><th>Error Rate</th><th>Cost per Error</th><th>Team Willingness</th><th>Verdict</th></tr></thead><tbody><tr><td>AP invoice matching</td><td>25+</td><td>2–5%</td><td>$200+</td><td>High</td><td>Strong candidate</td></tr><tr><td>Support ticket routing</td><td>15+</td><td>Moderate</td><td>Medium</td><td>High</td><td>Strong candidate</td></tr><tr><td>Quarterly financial reporting</td><td>≈3 (40 per quarter)</td><td>Low, with near-zero tolerance</td><td>Very high</td><td>Skeptical</td><td>Not the right starting point</td></tr></tbody></table>

When I run diagnostic sessions with clients, I ask every department head the same question: what does your team spend time on that they would describe as tedious, repetitive, and high-volume? The answers reveal AI opportunities that no amount of top-down strategy planning would uncover. Your warehouse team knows that inventory counts take 30 hours per month. Your recruiters know that resume screening takes 10 hours per open role. Your accounts receivable clerk knows that chasing late payments follows the same pattern every single time and could probably be scripted. These are not glamorous use cases. They will never make the cover of Harvard Business Review. But they pay for themselves in weeks, they build your team's confidence in AI as a practical tool, and they create the foundation for more ambitious projects down the road.
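If you want to play with the scoring yourself, here is a rough sketch of it in Python. The structure is illustrative, not a published formula: the weekly invoice volume of about 130 and the willingness value are assumptions I have added, the volume chosen so that a 3% error rate at $200 per error reproduces the roughly $800 per week of error cost from example one.

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    hours_per_week: float       # manual hours spent on the task today
    loaded_hourly_cost: float   # fully loaded cost of that labor
    error_rate: float           # fraction of items handled incorrectly
    cost_per_error: float       # downstream dollars per error
    items_per_week: float       # task volume, to turn a rate into a count
    willingness: float          # 0.0 (resistant) to 1.0 (begging for help)

    def weekly_cost(self) -> float:
        """Labor cost plus expected error cost, per week."""
        labor = self.hours_per_week * self.loaded_hourly_cost
        errors = self.items_per_week * self.error_rate * self.cost_per_error
        return labor + errors

    def score(self) -> float:
        """Priority score: weekly dollars at stake, discounted by how
        willing the team is to actually adopt the tool."""
        return self.weekly_cost() * self.willingness

# AP invoice matching from example one. The ~130 invoices/week volume
# and the 0.9 willingness are illustrative assumptions.
ap = Opportunity("AP invoice matching", 25, 35, 0.03, 200, 130, 0.9)
print(f"{ap.name}: ${ap.weekly_cost():,.0f}/week at stake, score {ap.score():,.0f}")
```

Treating willingness as a multiplier between 0 and 1 bakes in the lesson of example three: a high-dollar workflow attached to a skeptical team scores low, which is exactly why the quarterly reporting project was not the right place to start.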
Eight weeks from diagnosis to production
Most AI consulting engagements follow a predictable pattern: three months of discovery, a 60-page strategy document, a recommendation to build a data platform, and a Phase 2 proposal that costs more than Phase 1. The client gets a beautiful roadmap. Nothing gets deployed. Twelve months later, the roadmap is outdated and the budget is gone. At Millennial AI, we built The Millennial Method specifically to break that pattern. Four phases, roughly eight weeks from first conversation to a working system in production.

Phase one is Diagnose, and it takes one to two weeks. We audit three things: your operations (where time and money are actually being lost), your data (what exists, where it lives, how clean it is), and your competitive environment (what your peers and competitors are doing with AI, and where the gaps create opportunity). The output is an AI opportunity matrix that scores every potential use case on the framework I described in the previous section. This phase also kills bad ideas early. If a CEO walks in wanting a customer-facing chatbot but their internal data is a mess, we say so on day five instead of day ninety. Being direct about what will not work saves everyone time and money.

Phase two is Design, and it takes one to two weeks. We take the highest-scoring opportunity from the matrix and work through three decisions: the technical approach (off-the-shelf API, fine-tuned model, custom build, or some combination), the integration path into existing systems, and the success metrics that the client's team can verify independently. The deliverable is a one-page project plan. One page. If your AI strategy does not fit on one page, it is too complicated for a mid-market company to execute. I have seen 40-page AI strategy documents that took two months to write and were obsolete before the ink dried. One page forces clarity.

Phase three is Deploy, and it takes four to eight weeks depending on the complexity of the integration. We work in two-week sprints with a working demo at the end of each sprint. The client sees a functioning system handling real data at week two, provides feedback, and sees an improved version at week four. By week six or eight, the system is running in production on actual workflows. The critical difference from enterprise engagements is that the same person who wrote the one-page project plan is writing the production code two weeks later. No handoff from the strategy team to the implementation team, because those are the same people. Handoffs are where mid-market AI projects go to die. Every handoff introduces a two to four week delay, a round of re-explanation, and a loss of context that degrades the final product.

Phase four is Scale, and it runs ongoing after the initial deployment. Once the first workflow is live and delivering measurable results, we help the client identify the next opportunity from the matrix, train their internal team to manage the deployed system, and build the organizational muscle for future AI work. We also handle something most AI consultants ignore completely: helping the client communicate their AI capabilities to their market. A mid-market company that deploys AI-driven operations can use that story to win contracts, attract talent, and differentiate against larger competitors who are still stuck in pilot mode.
<div class="flow-row"><span class="flow-step">Diagnose (1–2 weeks)</span><span class="flow-arrow">→</span><span class="flow-step">Design (1–2 weeks)</span><span class="flow-arrow">→</span><span class="flow-step">Deploy (4–8 weeks)</span><span class="flow-arrow">→</span><span class="flow-step">Scale (ongoing)</span></div>

Let me make this concrete with a specific example. A $40M logistics company came to us last year. They managed regional freight routes across the Southeast, and their demand forecasting was done in Excel by an operations manager named Ray who had been with the company for 22 years. Ray was good at it. His gut instinct about seasonal patterns was genuinely impressive. But he was also a single point of failure, he could not explain his methodology to anyone else, and his forecasts were increasingly off as the company's route network expanded into new markets where his intuition did not apply.

Week one: we audited their operations and data. Their ERP had three years of clean shipment data with pickup dates, delivery dates, volumes, and route information. Their CRM was a disaster, but we did not need it for demand forecasting. Week two: we designed a forecasting model using their historical shipment data combined with publicly available economic indicators and seasonal patterns. Weeks three and four: first working model, tested against twelve months of historical data. It outperformed Ray's Excel forecasts by 19% on mean absolute error. Weeks five through eleven: iterative improvement based on Ray's feedback (he pointed out edge cases the model missed around holidays and weather events), integration with their routing software through the existing API, and training the ops team to interpret and act on the forecasts.

Eleven weeks from initial call to deployed system in daily use. The demand forecasting model reduced their excess capacity costs by 34% in the first quarter, which translated to roughly $380K in annual savings. Ray, instead of spending 15 hours a week on forecasting, now spends that time on route optimization and carrier negotiations where his 22 years of relationships actually matter. He describes the AI model as "the best hire we never made." That is what eight weeks to production looks like when you match the approach to the company instead of forcing the company into an enterprise framework.
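For readers who want to see what "tested against twelve months of historical data" means mechanically, here is a minimal backtest sketch. It is not the client's model: the data here is synthetic and the seasonal-naive baseline is deliberately simple, but the mean-absolute-error comparison over a held-out year is the same shape of test we ran against Ray's forecasts.

```python
import numpy as np

def mae(actual: np.ndarray, forecast: np.ndarray) -> float:
    """Mean absolute error: the average size of the miss, in shipment units."""
    return float(np.mean(np.abs(actual - forecast)))

def seasonal_naive(history: np.ndarray, horizon: int, season: int = 12) -> np.ndarray:
    """Forecast each future month as the same month one season earlier.
    A deliberately simple baseline; a candidate model has to beat it."""
    return np.array([history[-season + (h % season)] for h in range(horizon)])

# Synthetic monthly shipment volumes: trend plus seasonality plus noise,
# three years of history with the last 12 months held out for scoring.
rng = np.random.default_rng(7)
months = np.arange(36)
volumes = 1000 + 40 * months + 150 * np.sin(months * 2 * np.pi / 12) + rng.normal(0, 50, 36)
train, test = volumes[:24], volumes[24:]

baseline = seasonal_naive(train, horizon=12)
print(f"seasonal-naive MAE over the holdout year: {mae(test, baseline):,.0f} units")
# Swap `baseline` for the candidate model's forecasts and compare MAEs:
# a model is only worth deploying if its error is meaningfully lower.
```

The point of the holdout is that the model never sees the twelve months it is scored on, so the comparison approximates how each method would have performed had it been running live.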
Five mistakes that burn mid-market AI budgets
I have watched mid-market companies waste money on AI in remarkably consistent ways. The same five mistakes show up regardless of industry, company size within the mid-market range, or how smart the leadership team is. Each one is entirely avoidable if you know what to watch for.

Mistake one: buying enterprise software for a mid-market problem. A 50-person sales team does not need a $200K annual Salesforce Einstein license. A 30-person customer service department does not need a $150K conversational AI platform built for companies handling 50,000 tickets a day. Enterprise AI vendors are very good at demos. They show you capabilities that are genuinely impressive and completely irrelevant at your scale. I worked with a $70M retail company that spent $180K on an AI-powered demand planning suite designed for companies with 500+ SKUs across dozens of locations. They had 120 SKUs and four warehouses. A custom model built on their actual data, deployed in six weeks, outperformed the enterprise software on every metric and cost $45K including the first year of maintenance. The enterprise license sat unused for 14 months before they finally cancelled it and ate the early termination penalty. Total waste: north of $220K.

Mistake two: hiring a data science team before defining a data problem. This is the most expensive mistake on the list. A mid-market company decides they need "AI capabilities," so they hire a data scientist ($140K), a data engineer ($130K), and a junior ML engineer ($110K). Year-one cost with benefits, equipment, and software licenses: north of $450K. These three talented people then spend their first four months building data infrastructure because nothing is set up for model development. The next three months they explore potential use cases, building proof of concepts that nobody asked for. By month eight the CEO is asking why the AI team has not produced anything the business can use. The team is frustrated. They were hired without a defined problem, without clean data to work with, and without a connection to the operational teams who actually know where the pain points are. A better sequence: hire an external team to define and deliver your first two AI projects, then bring on one internal person to maintain and extend what has been built. That first hire should be an ML engineer who can operationalize deployed systems and keep them running.

Mistake three: starting with the exciting project instead of the profitable one. Every company wants to build the impressive AI feature. The customer-facing chatbot. The predictive analytics dashboard. The AI-generated content engine. These projects are visible, demo well in board meetings, and are almost never the highest-ROI starting point. A $55M professional services firm wanted to build an AI tool that would automatically generate proposals based on past winning bids. Interesting idea with genuine long-term potential. But their accounts payable process was losing $12K per month in late payment penalties, duplicate payments, and manual rework. We convinced them to start with AP automation. It paid for itself in 11 weeks. Then we built the proposal generator using savings from the AP project, and the CFO became AI's biggest internal champion because the first project spoke her language: dollars recovered, line items improved, cash flow strengthened. If we had started with the proposal generator, a project with fuzzy ROI and a longer timeline, the CFO would have been skeptical of AI for years.

Mistake four: skipping the data audit. Building AI on bad data produces bad AI that people learn to distrust. I have seen companies spend $80K building a lead scoring model on a CRM where 40% of records had missing fields, 15% had duplicate entries, and the "last contacted" field had not been reliably updated since 2023. The model technically worked. It produced scores. The scores were meaningless because the input data was meaningless, and the sales team figured that out within two weeks and stopped using it entirely. A two-week data audit before any model building would have caught these issues. The right response might have been to clean the CRM data first, which is still a valuable project that costs $15K-$30K and improves everything downstream. Every AI engagement should start with a brutally honest assessment of data quality. If your data is a mess, the first project is fixing the data, and that project will deliver value on its own even if you never build a model on top of it. (A minimal sketch of what such an audit script looks like appears at the end of this section.)

Mistake five: treating AI as a technology project instead of an operations project. The [World Economic Forum estimates](https://www.weforum.org/stories/2026/01/ai-for-smes-small-medium-enterprises/) that up to 95% of AI pilots fail to reach production. That number is staggering, and the primary reason behind it is organizational. Models work. APIs are reliable. Cloud infrastructure is mature enough that deployment is a solved problem. Pilots fail because the organization does not change how it operates around the new capability. An AI model that automates invoice matching only delivers value if the AP team actually uses it daily, if the exceptions process is redesigned to handle the 18% of invoices the model cannot match, if approval workflows are updated to reflect the new speed, and if someone monitors accuracy weekly and flags drift. A company that hands this off to IT and says "make the AI thing work" will get a technically functional system that nobody uses. AI projects need a business owner from the affected department who defines what success looks like and holds the team accountable to adoption metrics alongside technical ones.
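To make the data audit from mistake four concrete, here is a first-pass sketch in Python with pandas. The column names, the duplicate key, and the staleness cutoff are hypothetical stand-ins rather than fields from any particular CRM, but the three checks mirror the failure modes described above: missing required fields, duplicate records, and stale activity dates.

```python
import pandas as pd

REQUIRED = ["company", "email", "owner", "last_contacted"]  # hypothetical CRM fields

def audit_crm(df: pd.DataFrame, stale_after: str = "2024-01-01") -> dict:
    """First-pass data-quality snapshot before any model touches the CRM."""
    missing = df[REQUIRED].isna().any(axis=1).mean()           # rows missing any required field
    dupes = df.duplicated(subset=["company", "email"]).mean()  # likely duplicate records
    last = pd.to_datetime(df["last_contacted"], errors="coerce")
    stale = (last < pd.Timestamp(stale_after)).mean()          # not touched since the cutoff
    return {
        "rows": len(df),
        "pct_missing_required": round(100 * missing, 1),
        "pct_duplicates": round(100 * dupes, 1),
        "pct_stale_contacts": round(100 * stale, 1),
    }

# Usage on an exported CRM file (file name is illustrative):
#   df = pd.read_csv("crm_export.csv")
#   print(audit_crm(df))
# If the output looks like the 40% missing / 15% duplicate story above,
# the first project is cleaning the data, not building a model on it.
```

Two weeks of this kind of counting, done before any model work, is what separates a lead scoring project that gets adopted from one the sales team abandons in a fortnight.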
How to know if your AI strategy is working
Most companies I work with have no idea whether their AI efforts are on track or off the rails until it is too late. They measure the wrong things (model accuracy instead of business impact) or they measure nothing at all and rely on vibes. Here is what a healthy AI initiative looks like at 30, 90, and 180 days, with specific markers and warning signs at each stage.

At 30 days, you should have three things completed. First, a data audit that tells you exactly what data you have, where it lives, how clean it is, and what gaps exist. This does not need to be a 50-page report. A structured spreadsheet that lists every relevant data source, scores it on completeness and accuracy, and flags what needs to be fixed before any model touches it will do. Second, your first AI workflow should be identified and scoped, with specific success metrics attached. "Improve efficiency" is not a success metric. "Reduce invoice matching time from 25 hours per week to 5 hours per week with an error rate below 1%" is a success metric. Third, you should have baseline measurements for every metric you plan to improve. If you do not measure the before, you cannot prove the after, and you will spend months arguing about whether the AI project actually delivered value. (A small sketch of recording metrics and baselines this way appears at the end of this section.)

At 90 days, you should have your first AI workflow running in production with real users on real data: not a demo, but a system that people use as part of their daily work because it makes their job easier and they choose to use it. You should have measurable results: hours saved per week, errors reduced by a specific percentage, dollars recovered or costs avoided with a real number attached. The number does not need to be massive at this stage. A 30% reduction in time spent on a specific task is a legitimate 90-day result for a mid-market company. The team using the system should be able to explain, in their own words without coaching, what the AI does for them and why they prefer it to the old way of working. If they cannot do that, adoption is fragile and will erode the moment you stop paying attention.

At 180 days, the picture should look materially different from where you started. A second workflow should be deployed or in active development. The team should be using AI tools without being prompted or reminded by management. You should see people finding new applications on their own, coming to leadership with ideas like "could we use something similar for the returns process?" That organic demand is the strongest signal that your AI strategy is working. You should also have a clear backlog of future AI initiatives, prioritized by the same scoring framework used to select the first project, and that backlog should reflect what you learned during the first two deployments about what works in your specific environment.

Now for the warning signs that something has gone wrong. If you are 90 days into an AI initiative and you are still talking about strategy, still evaluating vendors, still debating which use case to start with, the initiative has stalled. The diagnosis phase should take two weeks. If it takes three months, the problem is almost always indecision or a lack of clear ownership. If conversations about your AI project center on model accuracy percentages, F1 scores, or technical architecture decisions rather than business outcomes, you have drifted from an operations project into a technology project. Pull it back. Ask the team: how many hours did this save last week? How many errors did it catch? What is the dollar value of what it produced? If nobody can answer those questions, the project has lost its connection to the business case that justified it.

The most dangerous warning sign is the perpetual pilot. This is the AI project that keeps getting extended by another month, another quarter, because it is "almost ready" or needs "a bit more training data" or requires "one more integration before we can go live." A pilot that has not reached production in 90 days is overwhelmingly likely to never reach production. Kill it, document what you learned, and start the next project with a harder deadline and a clearer definition of done. The sunk cost is real, but the ongoing opportunity cost of keeping a dead project on life support is worse.

Track one number above all others: days from project kickoff to first dollar of measurable business value. For mid-market companies working with the right approach, that number should be under 90.
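One lightweight way to keep the 90-day review honest is to write each success metric down as data, with its baseline and target, rather than as a sentence in a deck. The sketch below is illustrative, not a tool we ship; the current values are made up, though the invoice matching numbers echo the example metric above.

```python
from dataclasses import dataclass

@dataclass
class SuccessMetric:
    """A metric with a measured baseline and an explicit target:
    the 30-day deliverable that makes the 90-day result provable."""
    name: str
    unit: str
    baseline: float  # measured before the project starts
    target: float    # the definition of done
    current: float   # re-measured weekly once the system is live

    def on_track(self) -> bool:
        # Lower is better for time and error metrics;
        # flip the comparison for metrics where higher is better.
        return self.current <= self.target

    def gap_closed(self) -> float:
        """Fraction of the baseline-to-target gap closed so far."""
        gap = self.baseline - self.target
        return 0.0 if gap == 0 else (self.baseline - self.current) / gap

metrics = [
    SuccessMetric("invoice matching time", "hours/week", baseline=25, target=5, current=9),
    SuccessMetric("matching error rate", "% of invoices", baseline=3.0, target=1.0, current=0.4),
]
for m in metrics:
    print(f"{m.name}: {m.current} {m.unit}, "
          f"{m.gap_closed():.0%} of gap closed, on track: {m.on_track()}")
```

If a metric cannot be expressed this way, with a unit, a baseline, and a target, it is a vibe, not a success metric, and it will not survive the conversation with your CFO at day 90.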
What to do this week
I have given you a framework, a methodology, worked examples, and a list of mistakes to avoid. None of it matters if you do not take a concrete step this week. So here are three things you can do in the next five business days, each of which takes less than two hours.

First, pick your most painful manual workflow and time it. Literally sit with the person who does the work and watch. Do not rely on estimates from a manager two levels removed. Go to the person who touches the invoices, reads the tickets, compiles the reports. Count the hours per week. Count the errors per month by looking at rework, corrections, and complaints. Estimate what each error costs in rework time, customer impact, penalties, or delayed revenue. Write those three numbers down. You now have the raw material for an ROI calculation, and in my experience, the total is always larger than leadership expects. The $65M distribution company I mentioned earlier had no idea their AP process was costing them $180K per year in fully-loaded labor and error costs until someone actually sat down and counted. Their CEO said, "I knew it was bad. I did not know it was six figures bad."

Second, ask your team a specific question: if we could automate one repetitive task you do every week, what would give you the most time back? Do not ask leadership. Ask the people doing the work. Send a three-question survey or walk around the office and ask face to face. The answers will not match what you expect. Leadership usually assumes the biggest opportunity is in sales or customer-facing operations because those are the most visible. The people doing the work usually point to back-office processes: data entry, reconciliation, report generation, compliance documentation, scheduling, and follow-up emails that follow the same template every time. The gap between what leadership thinks is painful and what the team actually experiences as painful is where the highest-ROI AI projects are hiding. Closing that gap takes one afternoon of honest conversation.

Third, talk to someone who has done this at a company your size. Not a vendor demo where a sales engineer shows you a polished product running on sample data in a controlled environment. A real conversation with someone who has built and deployed AI for companies in the $20M to $500M revenue range. Ask the uncomfortable questions: what went wrong that you did not expect? What took twice as long as the timeline said it would? What would you skip entirely if you did it again? What did the project actually cost when you added up everything, including the internal time your team spent?

If you do not know anyone who fits that description, we are happy to be that conversation. Millennial AI offers a free diagnostic call. Thirty minutes, no slides, no pitch deck. We look at your operations, tell you honestly where AI fits and where it does not fit today, and give you a prioritized list of opportunities with rough ROI estimates attached. If the honest answer is "you are not ready yet and here is what to fix first," we will tell you that directly.

The mid-market AI gap is real, but it is closing fast. The companies that figure this out in 2026 will have a structural advantage over competitors who are still running pilots in 2027. The playbook is simpler than the industry wants you to believe. Find the painful workflow. Validate the math. Deploy fast. Measure everything. Scale what works. You do not need a $10M budget or a 200-person IT team. You need the right approach and the discipline to start with what matters instead of what sounds impressive.



