The Glow Report · Vol IV · Essay

The tyranny of the performance dashboard.

An instrument invented to measure things is now being used to decide them. The category’s quiet disaster.

Author
Ned Halloran
Published
April 2026
Reading time
9 minutes
Volume
No. IV · Q2 2026
Illustrative dashboard · “brand.performance / live” (not a real dashboard):
Revenue, 7 days · ↑ 6.4% · +$84,210 vs. prior week
ROAS, trailing 30d · 3.8× · +0.4× vs. prior month
Repeat rate, new cohort · −4.1pp · now 21.8%, target 38% · ignored for 18 months
Category share, ANZ grocery · −280 bps · 4 consecutive quarters · not on the board deck

Fig. 01 · What the dashboard shows and what it hides. Illustrative composite, based on four Tier C briefs, 2024–26.

Contents

  I. An instrument of measurement, now used to decide
  II. What the dashboard can see
  III. What the dashboard cannot see
  IV. The meetings it has ruined
  V. Re-tooling: when to look up

§I An instrument of measurement, now used to decide.

The performance dashboard was invented to help operators see, in close to real time, what had already happened. It did this job well. Then, somewhere between 2014 and 2019, it drifted quietly from a measurement instrument into a decision instrument. Most consumer brand teams today do not treat the dashboard as evidence about the past. They treat it as instruction for the future. The difference is enormous, and it is the subject of this essay.

There is a reasonable version of dashboard-led management. In that version, the dashboard surfaces problems, the leadership team examines them, decisions are made with reference to a wider brief that includes non-dashboardable inputs — category context, shopper research, pack judgement, the founder’s editorial line. That version is increasingly rare. The more common version is that the dashboard surfaces numbers, the numbers become the agenda, and the agenda becomes the strategy. Strategy collapses into optimisation.1

The collapse is quiet because the metrics themselves look healthy. ROAS holds. Revenue grows month on month. NPS drifts up by half a point. Meanwhile the brand is losing 280 basis points of category share a year, losing distinctive asset recognition in each six-monthly survey, and posting a repeat rate that has fallen for eighteen consecutive months. None of these sits on the dashboard the leadership team looks at. They sit on a slower dashboard — or not on any dashboard at all. A slower dashboard is not a dashboard a marketing director checks at 9am, so it becomes, in effect, invisible.

A brand in decline is almost always a brand whose dashboard looked fine right up until the moment it didn’t. — Glow field note, Tier C audit, 2025

§II What the dashboard can see.

To be clear about what we are critiquing: the dashboard is a useful instrument. It sees, in something close to real time, a set of things that used to take weeks to see. It sees campaign performance, promotional lift, channel mix, incremental revenue attribution, SKU-level velocity, retailer-by-retailer depletion. In disciplined hands, these inputs make marketing spend more efficient, help cut dead SKUs, and give retailer trade conversations an evidence base. None of what follows argues that those benefits are illusory. They are real and they are worth keeping.

The problem is not that the dashboard is wrong. The problem is that it is complete about what it measures and silent about what it does not. The silence is not observable through the dashboard. It has to be noticed by a human with a wider brief.

§III What the dashboard cannot see.

A partial list, drawn from twenty audits of Tier C consumer brands we have run in the last three years:

What the dashboard sees

Near-term, dashboard-visible.

  • Revenue, week-on-week
  • ROAS on paid media
  • Channel mix and CPA
  • Promotional lift
  • Unit velocity at listed SKUs
  • Bounce rate, conversion, basket size
  • Subscriber count, open rates
  • NPS on the last 30 days

What the dashboard will not see

Long-horizon, brand-load-bearing.

  • Category share over 5 years
  • Distinctive asset recognition
  • Repeat rate by cohort, not aggregate
  • Retailer margin-fit relative to category
  • Shopper language drift (how they describe you)
  • Pack-at-hand experience on repurchase
  • Whether the founder still edits anything
  • Whether the category has moved without you

Each item in the right column is the kind of thing that, left unattended for six to twelve quarters, produces an eventual crisis that the left column then tries to explain. “Why has category share fallen?” asks the board. The dashboard says it hasn’t — last week’s revenue was up. The dashboard is not lying. It is answering the question it has been built to answer. It was not built to answer the question that matters.

§IV The meetings it has ruined.

It is worth describing what happens inside a consumer brand where the dashboard has taken over. The weekly commercial meeting opens with a dashboard review. Decisions are made with reference to numbers from the prior week. The brand meeting, held less often, begins with a dashboard review — the same numbers, slightly older. The quarterly strategy offsite begins with a dashboard review. The annual planning session has a dashboard review as its opening slide.

Every meeting that could have been about decision becomes a meeting about reconciliation. Reconciliation of the numbers against targets set six months ago by people looking at the same dashboard. The meeting does not decide anything because the instrument does not ask any questions that require a decision. It asks questions that require an explanation. This is a different kind of meeting. It is also, over time, the kind of meeting from which no brand move emerges.

The specific casualty of these meetings is the founder’s editorial voice, covered at length in Lena Osei’s companion essay in this volume. A founder in a dashboard-led meeting is asked to defend numbers they did not personally produce against targets they did not personally set. After enough of those meetings, the founder stops proposing things the dashboard cannot measure, because proposing them is tiring. Taste, which is by definition the thing that gets proposed before the dashboard can see it, withers first.

The dashboard is the most polite instrument of self-sabotage consumer brands have ever installed. — Ned Halloran, field note

§V Re-tooling: when to look up.

We are not arguing for deleting the dashboard. We are arguing for restoring it to its original function: an instrument that measures, not one that decides. A few specific moves we have found useful with the brands we work with:

Separate the meetings. A weekly commercial meeting run off the dashboard is useful. The same meeting pretending to be a brand meeting is corrosive. Brand meetings should not open with a dashboard review. They should open with a question that the dashboard cannot answer — the state of the repeat rate, the state of the retailer margin, the state of the distinctive asset, the state of the founder’s editorial line. If the first slide of a brand meeting is a dashboard, the meeting is not a brand meeting.

Keep a slower dashboard. Maintain a second dashboard, updated quarterly rather than daily, of the metrics that take years to move and cost everything when they slip. Category share. Unaided recognition. Repeat rate by cohort. Retailer margin position relative to category. Pricing power (unit-priced premium over private label, trended). This dashboard has no place in operations. It is a board dashboard, and it should be the first thing the board sees, ahead of the operating dashboard.

Protect editorial decisions from attribution. Decisions about pack, voice, distinctive asset, category positioning — the taste decisions — should not be tracked against ROAS. They should be tracked against the slower dashboard, and even then only at 18-month intervals. A decision about whether the bottle shape is right is not a weekly decision. Measuring it weekly is how you end up with a committee-made bottle.

None of this is technically hard. What is hard is the political act of telling a team that the instrument they are most proud of is not the instrument that matters most. That telling is, ironically, a taste decision. It is made by the person with profit-and-loss responsibility. It cannot be dashboard-validated. It is precisely the kind of decision the dashboard trained the team to stop making.

Footnotes

  1. This observation is not original — versions of it appear in Byron Sharp, Les Binet & Peter Field, and in the ongoing IPA Effectiveness debate. What we add is field observation from twenty consumer-brand audits where the dashboard drift has quietly produced a specific category-share crisis. See Glow Group internal doc The Slow Dashboard, 2025.

Ned Halloran

Senior Partner, Commercial & Media · Glow Group

Ned leads commercial and media advisory at Glow Group. Twelve years at Accenture Song, four years running growth at an Australian beverage business through the growth-to-decline cycle that informs much of this writing. Believes every brand eventually gets the dashboard it deserves, and spends a professional lifetime trying to make sure it is the useful kind.

The Glow Report

Four volumes a year. One thesis.

Consumer brand research, essays, and field notes from Glow Group’s strategy and retail intelligence practices. 3,200 readers. One opinionated editorial line.