Function 04 / 07 · Measure

What you can see, and what you can't.

Measure is the function with the most software and the least clarity. It's the difference between a dashboard that scrolls and a dashboard that decides. What you count, what you compare, what you act on.

Sample reading · READING-04 · v02 · live · Apr 26

You're defined on this one. Composite score: 78/100.

Level 01 Reactive · Level 02 Emerging · Level 03 Defined (now) · Level 04 Optimised.

Next move: Cut your dashboard in half. Half your metrics aren't decision-grade. Find them. Kill them.
02

What Measure means.

Measure is the work of making the operation visible. Not the work of building dashboards. Not the work of monitoring KPIs. The work of choosing what to count, what to ignore, what each number actually tells you about whether the operation is working, and acting on the answer.

Most CX teams measure too much. CSAT, NPS, FCR, AHT, ASA, abandonment, escalation rate, transfer rate, contact-per-customer, repeat contact rate, response time SLA, resolution time SLA. The dashboard scrolls. The leadership team nods. Nothing changes. The metrics are a ritual, not a tool.

That's the gap Measure names. Most metrics are inherited rather than chosen. Most dashboards are populated rather than designed. Most reviews discuss what changed without asking why, or whether the metric is even decision-grade. Measurement becomes performance, not signal.

"We had 23 metrics on the executive dashboard. I asked which one we'd act on if it moved. The team named four. The other 19 were just there because they always had been. We were measuring things we didn't have decisions to make about."

Director of CX Operations · Fintech · interviewed Feb 2026

Haven's Measure module starts with the metric audit. Every metric tied to a decision it could trigger. Every metric without a decision retired. The dashboard becomes deliberate.

Decision-grade metrics get the attention. Performance metrics get the right altitude. The operation responds faster because the signals are cleaner.

03

The progression. Four levels.

Level 01 · Reactive · You've passed

The dashboard is full and nobody trusts it. Metrics are inherited from previous leaders or vendor defaults. Most reviews are theatre. Decisions happen on instinct, despite the data.

  • Inherited metric set
  • Vendor defaults
  • Review theatre
  • Decisions despite data
Level 02 · Emerging · You've passed

Some metrics are trusted. The leadership team has favourites. The rest of the dashboard is noise. Reviews surface obvious changes but rarely ask why.

  • Trusted handful of metrics
  • Most dashboard is noise
  • Surface-level reviews
  • Why questions rare
Level 03 · Defined · You are here

Every metric has a named decision. The dashboard is shorter. Reviews ask why before what. Metrics without decisions get retired without ceremony.

  • Decision-tied metrics
  • Shorter dashboard
  • Why-first reviews
  • Retirement is routine
Level 04 · Optimised · 2-3 months out

The metric set evolves with the operation. New decisions trigger new metrics. Solved problems retire their metrics. The dashboard is a living instrument, not a museum.

  • Living metric set
  • New decisions create new metrics
  • Solved problems retire metrics
  • Dashboard as instrument
04

What Measure builds.

Artifact 01

The metric audit

Every metric tied to a decision. Metrics without decisions retired. The dashboard becomes a tool, not a wall.

  • Each metric mapped to a decision
  • Orphan metrics retired
  • Targets re-grounded in operating cost
  • Owner & cadence per surviving metric
~4 hours to first audit
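The audit logic above is simple enough to sketch in a few lines. This is an illustrative sketch only, not Haven's actual schema; the metric names, decisions, and owners are hypothetical stand-ins.

```python
# Minimal sketch of a metric audit: every metric either maps to a decision
# it could trigger, or it's an orphan and gets retired.
# All names below are hypothetical examples, not a real dashboard.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Metric:
    name: str
    decision: Optional[str]       # the decision this metric could trigger, if any
    owner: Optional[str] = None   # who acts when it moves
    cadence: Optional[str] = None # how often it's reviewed


dashboard = [
    Metric("CSAT", decision="revise agent coaching plan", owner="CX Lead", cadence="weekly"),
    Metric("AHT", decision=None),            # orphan: no decision attached
    Metric("abandonment", decision="add staffing to peak hours", owner="WFM", cadence="daily"),
    Metric("transfer rate", decision=None),  # orphan
]

keep = [m.name for m in dashboard if m.decision]
retire = [m.name for m in dashboard if not m.decision]

print(f"Keep: {keep}")      # decision-grade metrics survive
print(f"Retire: {retire}")  # orphans come off the dashboard
```

Run against a real metric set, the split is the whole audit: the `keep` list is the new dashboard, the `retire` list is the conversation with leadership.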
Artifact 02

The decision register

A named list of operational decisions and the metrics that trigger each one. The bridge between data and action.

  • Decisions named, not just metrics
  • Trigger thresholds per decision
  • Metric → decision mapping
  • Action owners & escalation path
~2 hours to first version
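A decision register inverts the usual dashboard structure: decisions are the keys, metrics are just their triggers. The sketch below assumes hypothetical decisions, thresholds, and owners to show the shape.

```python
# Minimal sketch of a decision register, keyed by decision rather than metric.
# Thresholds, owners, and escalation paths are illustrative assumptions.

register = {
    "add staffing to peak hours": {
        "metric": "abandonment",
        "threshold": 0.05,      # trigger when abandonment exceeds 5%
        "direction": "above",
        "owner": "WFM lead",
        "escalation": "Head of CX Ops",
    },
    "revise agent coaching plan": {
        "metric": "CSAT",
        "threshold": 4.2,       # trigger when CSAT drops below 4.2
        "direction": "below",
        "owner": "CX Lead",
        "escalation": "Director of CX",
    },
}


def triggered(decision: str, value: float) -> bool:
    """Return True when the metric reading crosses the decision's threshold."""
    d = register[decision]
    if d["direction"] == "above":
        return value > d["threshold"]
    return value < d["threshold"]


print(triggered("add staffing to peak hours", 0.07))  # abandonment at 7% -> True
print(triggered("revise agent coaching plan", 4.5))   # CSAT at 4.5 -> False
```

The design choice worth noting: because the register is keyed by decision, a metric with no entry anywhere in it is by definition an orphan, which is what makes the audit and the register two views of the same artifact.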
Artifact 03

The review cadence

A weekly or fortnightly review structured around decisions, not numbers. Why before what. Action before report.

  • Structured around decisions, not numbers
  • Why before what, action before report
  • Pre-read distributed in advance
  • Decision log published every week
60 min · weekly