What you can see, and what you can't.
Measure is the function with the most software and the least clarity. It's the difference between a dashboard that scrolls and a dashboard that decides. What you count, what you compare, what you act on.
What Measure means.
Measure is the work of making the operation visible. Not the work of building dashboards. Not the work of monitoring KPIs. The work of choosing what to count, what to ignore, what each number actually tells you about whether the operation is working, and acting on the answer.
Most CX teams measure too much. CSAT, NPS, FCR, AHT, ASA, abandonment, escalation rate, transfer rate, contact-per-customer, repeat contact rate, response time SLA, resolution time SLA. The dashboard scrolls. The leadership team nods. Nothing changes. The metrics are a ritual, not a tool.
That's the gap Measure names. Most metrics are inherited rather than chosen. Most dashboards are populated rather than designed. Most reviews discuss what changed without asking why, or whether the metric is even decision-grade. Measurement becomes performance, not signal.
"We had 23 metrics on the executive dashboard. I asked which one we'd act on if it moved. The team named four. The other 19 were just there because they always had been. We were measuring things we didn't have decisions to make about."
Haven's Measure module starts with the metric audit. Every metric tied to a decision it could trigger. Every metric without a decision retired. The dashboard becomes deliberate.
Decision-grade metrics get the attention. Performance metrics get the right altitude. The operation responds faster because the signals are cleaner.
The progression. Four levels.
The dashboard is full and nobody trusts it. Metrics are inherited from previous leaders or vendor defaults. Most reviews are theatre. Decisions happen on instinct, despite the data.
- Inherited metric set
- Vendor defaults
- Review theatre
- Decisions despite data
Some metrics are trusted. The leadership team has favourites. The rest of the dashboard is noise. Reviews surface obvious changes but rarely ask why.
- Trusted handful of metrics
- Most dashboard is noise
- Surface-level reviews
- Why questions rare
Every metric has a named decision. The dashboard is shorter. Reviews ask why before what. Metrics without decisions get retired without ceremony.
- Decision-tied metrics
- Shorter dashboard
- Why-first reviews
- Retirement is routine
The metric set evolves with the operation. New decisions trigger new metrics. Solved problems retire their metrics. The dashboard is a living instrument, not a museum.
- Living metric set
- New decisions create new metrics
- Solved problems retire metrics
- Dashboard as instrument
What Measure builds.
The metric audit
Every metric tied to a decision. Metrics without decisions retired. The dashboard becomes a tool, not a wall.
- Each metric mapped to a decision
- Orphan metrics retired
- Targets re-grounded in operating cost
- Owner & cadence per surviving metric
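The audit above can be sketched as a simple mapping exercise. This is a minimal, hypothetical illustration (the metric names and decisions are invented, not Haven's actual data model): every metric either points at a decision it can trigger, or it is flagged for retirement.

```python
# Hypothetical metric -> decision mapping for a metric audit.
# A metric with no attached decision is an orphan and gets retired.
metrics = {
    "CSAT": "adjust coaching focus for low-scoring queues",
    "AHT": None,                # no decision attached — candidate for retirement
    "abandonment_rate": "add staffing to the queue that breached threshold",
    "transfer_rate": None,
}

def audit(metric_map):
    """Split metrics into decision-grade keepers and orphans to retire."""
    keep = {m: d for m, d in metric_map.items() if d}
    retire = [m for m, d in metric_map.items() if not d]
    return keep, retire

keep, retire = audit(metrics)
print(sorted(retire))  # → ['AHT', 'transfer_rate']
```

The useful part is not the code but the constraint it encodes: a metric with a `None` decision has no reason to stay on the dashboard.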
The decision register
A named list of operational decisions and the metrics that trigger each one. The bridge between data and action.
- Decisions named, not just metrics
- Trigger thresholds per decision
- Metric → decision mapping
- Action owners & escalation path
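A decision register of this shape can be sketched as a small data structure. This is an illustrative assumption, not Haven's implementation: each entry names the decision, the metric that triggers it, the threshold, and the action owner, so a review can ask "which decisions fired this week?" rather than "which numbers moved?".

```python
from dataclasses import dataclass

# Hypothetical register entry: decision, trigger metric, threshold, owner.
# All names and thresholds are illustrative.
@dataclass
class Decision:
    name: str
    metric: str
    threshold: float
    owner: str

register = [
    Decision("rebalance staffing", "abandonment_rate", 0.05, "ops lead"),
    Decision("open escalation review", "escalation_rate", 0.10, "cx manager"),
]

def fired(register, readings):
    """Return decisions whose trigger metric breached its threshold."""
    return [d for d in register if readings.get(d.metric, 0.0) > d.threshold]

readings = {"abandonment_rate": 0.07, "escalation_rate": 0.04}
for d in fired(register, readings):
    print(f"{d.name} -> {d.owner}")  # → rebalance staffing -> ops lead
```

The design choice is that decisions, not metrics, are the primary records; metrics appear only as triggers inside them.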
The review cadence
A weekly or fortnightly review structured around decisions, not numbers. Why before what. Action before report.
- Structured around decisions, not numbers
- Why before what, action before report
- Pre-read distributed in advance
- Decision log published after every review