Illustrative retail diagnostic: a mid-sized apparel retailer ran weekly promotions without measuring true return. POS showed lift; finance showed margin compression. The diagnostic found no structural link between promotional plans, COGS, and till transactions, so ROI could not be calculated honestly.
Illustrative scenario — not a specific client.
Picture twenty-eight stores plus a website, and a promotional calendar that has been running so long nobody can remember who set the rhythm. End-of-season clearance, category weeks, double loyalty points — the mechanics change slightly each season, but the cadence is fixed. Buying looks at scan data and says a week went well when units moved. Finance looks at the same week after rebates, freight, and markdown accruals land, and sees margin under pressure. Nobody is lying. They are simply not measuring the same thing, and nobody had written down what “success” should mean before the promotion went live.
In meetings, the same conversation repeated. Someone from the floor would say the till does not lie; they saw the queues. Finance would push back: “show me margin by SKU for that promotion window,” and the room would go quiet, because the promotion existed as a spreadsheet in buying and as a loose bundle of discounts at the till, and nobody could draw a straight line between the two. People also admitted, almost sheepishly, that they never ran the same promotion twice with identical mechanics, so year-on-year comparison was already muddy. And when someone asked what would have sold without the promo (the baseline), three different people had three different spreadsheet methods.
A diagnostic was brought in to answer a blunt question: can this business actually measure promotional return, or is the data environment making that impossible no matter who you hire?
The review did not touch the POS vendor or the ERP. It looked at what the systems actually stored: line-level discounts, whatever codes staff had used, the buying team’s plans, and how finance recognised cost and rebates. The picture was clear enough without new software.
What turned up first was awkwardly simple. Buying tracked plans in spreadsheets and email threads. The point of sale applied discounts in three different ways — a proper promotion code when someone had bothered to set one up, a percentage keyed in by a supervisor, or a manager override with no code at all. When the diagnostic sampled promotional weeks, fewer than forty percent of discount lines carried an identifier that matched anything on the buying calendar. The rest washed into generic buckets like “markdown” or “promotion — other.” You cannot compute incremental volume, margin given up, or cannibalisation of full-price lines when you cannot say which transaction belonged to which planned event. Volume was there. The story of why margin moved was not in the data.
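To make the attribution gap concrete, here is a minimal sketch in Python of the check the diagnostic effectively performed. The field names, promotion IDs, and sample lines are invented for illustration, not the retailer's schema; the point is that only discount lines carrying an ID that exists on the buying calendar can be attributed to a planned event, and everything else falls into an unattributable bucket.

```python
# Hypothetical illustration of the attribution check.
# All IDs, field names, and sample lines are invented.

promo_calendar = {"P-2024-W12-DENIM", "P-2024-W12-LOYALTY2X"}  # IDs from the buying plan

discount_lines = [
    {"txn": 1001, "promo_id": "P-2024-W12-DENIM", "type": "promo_code"},
    {"txn": 1002, "promo_id": None, "type": "supervisor_pct"},     # keyed-in percentage
    {"txn": 1003, "promo_id": None, "type": "manager_override"},   # no code at all
    {"txn": 1004, "promo_id": "MARKDOWN", "type": "promo_code"},   # generic bucket
]

attributable = [line for line in discount_lines if line["promo_id"] in promo_calendar]
share = len(attributable) / len(discount_lines)
print(f"{share:.0%} of discount lines tie back to a planned event")
# Anything well short of 100% means incremental volume, margin given up,
# and cannibalisation cannot be computed per event, only guessed at.
```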
The second issue was timing. Promotions ran in weekly windows; finance recognised supplier rebates and landed cost adjustments monthly, and some co-op money landed in the ledger after the promotional window had already been judged a success in a buying meeting. Buying talked about margin as selling price minus standard cost straight off the POS. Finance talked about actual landed cost and allocated rebates. Again, both could be reasonable — but comparing them without a single rulebook meant every post-mortem was really two departments talking past each other.
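One way to see the disconnect is to compute margin for the same promotional week under both rulebooks. The figures below are invented, and the straight-line pro-rata rebate allocation is an assumption standing in for whatever policy the two departments would actually agree; the sketch only shows that two defensible definitions produce two different numbers for the same week.

```python
# Same week, two margin definitions. All figures are illustrative.

units_sold = 1200
selling_price = 25.00
standard_cost = 14.00        # what buying sees at the till
landed_cost = 15.10          # actual cost after freight and adjustments
monthly_rebate = 3600.00     # supplier co-op money, recognised monthly
weeks_in_month = 4

# Buying's view: selling price minus standard cost, straight off the POS.
buying_margin = units_sold * (selling_price - standard_cost)

# Finance's view: actual landed cost, plus the slice of the monthly rebate
# allocated to this window (simple pro-rata here, purely as an assumption).
finance_margin = (units_sold * (selling_price - landed_cost)
                  + monthly_rebate / weeks_in_month)

print(f"buying:  {buying_margin:,.2f}")   # 13,200.00
print(f"finance: {finance_margin:,.2f}")  # 12,780.00
# Neither number is wrong; they answer different questions. Without one
# agreed rulebook, the post-mortem compares them as if they were the same.
```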
The third was the baseline problem. Uplift sounds scientific until you ask what you are comparing against. Some analysts used the four weeks before the event. Others used the same week last year. Someone else blended both. Leadership received different answers for the same promotion depending on who ran the spreadsheet. There was no documented method — and therefore no way to settle a dispute about whether the promo paid for itself.
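The divergence is easy to demonstrate. A short sketch with invented weekly unit sales, applying the three baseline methods the analysts were using, shows how far apart the answers can sit for one and the same promotion.

```python
# Three baseline methods applied to the same promotional week.
# Weekly unit sales are invented for the illustration.

prior_4_weeks = [980, 1010, 940, 1005]   # the four weeks before the event
same_week_last_year = 1150
promo_week = 1200

avg_prior = sum(prior_4_weeks) / len(prior_4_weeks)
baselines = {
    "avg of prior 4 weeks": avg_prior,
    "same week last year":  same_week_last_year,
    "blend of both":        (avg_prior + same_week_last_year) / 2,
}

for method, base in baselines.items():
    uplift = (promo_week - base) / base
    print(f"{method:>22}: baseline {base:7.1f}, uplift {uplift:+.1%}")
# Uplift ranges from roughly +4% to +22% for the same week,
# purely from the choice of counterfactual.
```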
So the problem was not that people were careless. It was that nobody had been given authority to say: this is how we define promotional margin, this is how we tie a till line to a plan, and this is how we define baseline — and until that happens, another analytics dashboard only paints the confusion faster.
After the diagnostic, the retailer did the unglamorous work before buying anything new. They built a promotion master: every event got a real ID, dates, eligible products, and mechanics, and the POS was configured so discounts had to flow through that ID except in rare, coded exceptions. Manager overrides were tightened — a small set of reasons, no more ad-hoc percentages because the queue was long. Buying and finance signed one page on what promotional margin meant: which cost basis, how rebates attached to which window, and when recognition counted. They picked a baseline method — same store, same week prior year, with explicit rules for stockouts and odd calendar weeks — so at least everyone argued from the same counterfactual. And they named people: who owned the calendar, who owned cost allocation, and who reconciled the two when they disagreed.
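None of this requires sophisticated tooling; the discipline is in the required fields and the validation rule. A minimal sketch of what a promotion master and the till-line check might look like, with the schema, field names, and override reason codes all assumed for illustration rather than taken from the retailer.

```python
# A minimal promotion master and the till-line check it enables.
# Field names and reason codes are assumptions, not the retailer's schema.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Promotion:
    promo_id: str            # the real ID every discount must carry
    start: date
    end: date
    eligible_skus: frozenset
    mechanic: str            # e.g. "pct_off", "loyalty_2x", "bundle"

# The small set of coded exceptions that replaced ad-hoc overrides.
ALLOWED_OVERRIDE_REASONS = {"damaged_goods", "price_match", "recovery_gesture"}

master = {
    p.promo_id: p for p in [
        Promotion("P-2025-W03-CLEAR", date(2025, 1, 13), date(2025, 1, 19),
                  frozenset({"SKU123", "SKU456"}), "pct_off"),
    ]
}

def valid_discount_line(promo_id, sku, sold_on, override_reason=None):
    """A discount line must reference a live promotion covering this SKU,
    or carry one of the coded exception reasons. Nothing else passes."""
    if override_reason is not None:
        return override_reason in ALLOWED_OVERRIDE_REASONS
    promo = master.get(promo_id)
    return (promo is not None
            and promo.start <= sold_on <= promo.end
            and sku in promo.eligible_skus)

print(valid_discount_line("P-2025-W03-CLEAR", "SKU123", date(2025, 1, 15)))  # True
print(valid_discount_line(None, "SKU999", date(2025, 1, 15)))                # False
```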
The goal was never perfect attribution. It was to get to a place where leadership could say, with a straight face, “we know enough to run that again or kill it” — and mean it.