A management consultancy. 18 staff, 7 years old, well-run by most measures. Monthly P&L reviews. Quarterly board updates. Weekly utilisation tracking. The kind of business that takes its reporting seriously.
In February, revenue was down 22% on the prior year. The MD called an emergency board meeting. Everyone was surprised.
They shouldn't have been. The problem started in November. They just weren't measuring it.
A Business That Tracked Everything — Except What Mattered
The firm's reporting covered the metrics you'd expect from a well-managed professional services business: revenue by month, project delivery rates, staff utilisation, client retention. Solid, consistent, reviewed regularly.
The problem wasn't the quality of the reporting. It was the type. Every single metric they tracked was a lag indicator — a measure of what had already happened. Revenue tells you what you billed. Utilisation tells you how your team was deployed. Delivery rates tell you what you completed.
None of those metrics tell you what's coming.
If you're not clear on the difference between lag and lead indicators, read this guide first — it covers the distinction in full and will make the rest of this case study click.
What Actually Happened in November
In November, the firm's new business pipeline quietly dried up. Fewer proposals going out. Win rates softening. Sales cycles getting longer. The average value of deals in the active pipeline was shrinking.
None of this showed up anywhere in their reporting, because none of their metrics were designed to look forward. The signals existed in the CRM the team was already using; they just weren't being captured or reviewed, so nobody saw the warning until it was already three months old.
By December, the pipeline was thin. By January, gaps were appearing in the delivery schedule. By February, the shortfall hit the P&L, and suddenly it looked like a crisis, when in reality it was a November problem that had been given 90 days to compound.
The Lead Indicators They Added
After the February review, the MD worked with an adviser to build a simple forward-looking dashboard alongside their existing financial reporting. Five metrics, reviewed weekly:
- Proposals sent per week
- Time from first meeting to proposal submitted
- Win rate on proposals (rolling 8 weeks)
- Average deal value across active pipeline
- Qualified meetings booked for the next four weeks
None of these are complicated. None of them require specialist data skills. They're counts and averages, drawn from a CRM that the team was already using.
The difference was that they were now being tracked deliberately, reviewed on a fixed cadence, and tied directly to a revenue forecast, rather than left sitting unexamined in the CRM.
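To show just how simple these metrics are, here's a minimal sketch of how the five might be pulled together, assuming the CRM can export proposals, active deals, and booked meetings as CSV files. Every file and column name (sent_date, first_meeting_date, outcome, deal_value, meeting_date) is a hypothetical stand-in for whatever your CRM actually calls the equivalent field; this is not the firm's actual dashboard.

```python
# Illustrative sketch only: all file and column names are hypothetical
# stand-ins for the equivalent fields in your own CRM export.
import pandas as pd

proposals = pd.read_csv("proposals.csv", parse_dates=["sent_date", "first_meeting_date"])
pipeline = pd.read_csv("pipeline.csv")    # one row per active deal
meetings = pd.read_csv("meetings.csv", parse_dates=["meeting_date"])  # qualified meetings only

# Days from first meeting to proposal submitted, computed per proposal
proposals["days_to_proposal"] = (
    proposals["sent_date"] - proposals["first_meeting_date"]
).dt.days

weekly = proposals.set_index("sent_date").sort_index()

# 1. Proposals sent per week
proposals_per_week = weekly.resample("W").size()

# 2. Average time from first meeting to proposal submitted, per week
time_to_proposal = weekly["days_to_proposal"].resample("W").mean()

# 3. Win rate on proposals, rolling 8 weeks (decided proposals only)
decided = weekly[weekly["outcome"].isin(["won", "lost"])]
wins = (decided["outcome"] == "won").resample("W").sum()
total = decided["outcome"].resample("W").count()
win_rate_8w = wins.rolling(8).sum() / total.rolling(8).sum()

# 4. Average deal value across the active pipeline
avg_deal_value = pipeline["deal_value"].mean()

# 5. Qualified meetings booked for the next four weeks
today = pd.Timestamp.today().normalize()
next_four_weeks = meetings["meeting_date"].between(today, today + pd.Timedelta(weeks=4))
qualified_meetings = int(next_four_weeks.sum())
```

Nothing in that sketch goes beyond counts and averages, which is the point: the hard part is the cadence and the ownership, not the computation.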
What Changed
Within one quarter of running both lag and lead indicators together, the firm could see a potential revenue shortfall coming 8 to 12 weeks before it hit the P&L. That's not a small thing. Even eight weeks is enough time to:
- Accelerate outbound business development activity
- Bring forward conversations with existing clients about follow-on work
- Adjust resource planning before gaps become costly
- Have a calm, strategic board conversation instead of an emergency one
The difference between reacting and responding is almost always time. Lead indicators buy you that time.
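To make the early warning concrete, here is one hedged way the indicators above could be wired into a forward view. It projects expected new revenue from current proposal volume, win rate, and deal size, and flags a sustained run below target. The weekly target figure and the flat conversion assumptions are invented for illustration; the firm's actual forecast model isn't described in this case study.

```python
# Hypothetical forward view built on the metrics computed above. The
# target and flat conversion assumptions are illustrative only.
WEEKLY_NEW_REVENUE_TARGET = 40_000  # invented figure for illustration

expected_weekly_revenue = (
    proposals_per_week.rolling(8).mean()  # smoothed proposal volume
    * win_rate_8w                         # rolling 8-week win rate
    * avg_deal_value                      # current average deal size
)

# Drop weeks with no decided proposals, then flag a sustained shortfall
below_target = expected_weekly_revenue.dropna() < WEEKLY_NEW_REVENUE_TARGET
if below_target.tail(4).all():
    print("Projected new revenue below target for four straight weeks")
```

Because a proposal sent today typically takes one to three months to become billable work, a dip in a projection like this surfaces roughly 8 to 12 weeks before the same dip appears in revenue. That lead time is exactly what the firm was missing in November.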
The Lesson
This firm wasn't bad at reporting. It was thorough, disciplined, and consistent. The mistake was structural: a reporting stack built entirely around lag indicators gives you a perfect rear-view mirror and no windscreen.
Most SMEs are in exactly this position. The financial reporting is handled. The operational metrics cover what already happened. And the forward-looking signals — the ones that would actually let you manage the business rather than just observe it — are nobody's job.
That's a BI problem. And it's one of the most fixable ones there is.