How your team actually shows up.
Perform is the function nobody names but everyone feels. It's the difference between a team that closes tickets and a team that builds trust. The bar that defines what "good" looks like, every day, on every conversation.
v 02 · live
What Perform means.
Perform is the work of showing up consistently across a team. Not the work of writing scripts. Not the work of monitoring dashboards. The work of agreeing what a good conversation sounds like, and calibrating to that bar every week.
Most CX teams have a vague sense of "good." Senior agents carry it in their heads. New hires absorb it through osmosis. Quality scoring catches the worst cases. Nobody can answer the question "what does great look like, today, on this customer's situation?" with anything specific.
That's the gap Perform names. The bar isn't written down. The rubric isn't shared. The calibration cadence is monthly at best. Agents fly on instinct. Managers grade on instinct. Quality drifts.
"I had a bar. I just couldn't tell my team what it was. I'd review a conversation and know within three lines if it was good or not. But if you asked me to write down the rule, I couldn't."
Haven's Perform module builds the shared rubric first. Three to five named dimensions. Four named levels per dimension. Live calibration on real conversations weekly. The bar becomes legible. New hires onboard against it. Senior agents teach against it. Quality stops drifting.
The work isn't fancy. It's a craft skill that's been buried under "QA software" for ten years. Haven names it, structures it, and shares it. That's the function.
The progression. Four levels.
Level 01
"Good" lives in senior agents' heads. Quality is monitored after the fact. New hires learn through ride-alongs. Calibration is monthly or quarterly, often skipped.
- No written rubric
- No shared definition of good
- QA scoring exists but isn't trusted
- Senior agents are the bar
Level 02
A bar exists, but it isn't shared. The lead has it. Some senior agents have it. New hires don't. Quality scoring catches obvious misses but doesn't lift the median.
- Lead carries the bar
- Coaching happens 1:1
- Calibration is informal
- New hires drift toward the floor, not the bar
Level 03
The bar is named, owned, and calibrated weekly. A shared rubric. Three to five dimensions. Four levels each. Every agent knows what good looks like, today.
- Written rubric, version-controlled
- Weekly calibration session
- Shared with the team
- Owner named
Level 04
The rubric evolves with the work. Calibration findings update the rubric. The rubric trains new hires automatically. Quality drift is detected before it shows up in CSAT.
- Self-improving rubric
- Auto-onboarding from rubric
- Drift detection
- Quality leads CSAT, not lags it
What Perform builds.
The shared rubric
Three to five dimensions. Four levels per dimension. Calibrated weekly. The highest-leverage artifact in the function.
- 3–5 dimensions, 4 levels each
- Calibrated examples per level
- Weekly recalibration cadence
- Linked to onboarding & Enable
The calibration ritual
A weekly 30-minute session where the team scores the same five conversations. Findings update the rubric. The rhythm that keeps the bar legible.
- Same 5 conversations, scored together
- Findings update the rubric live
- Disagreement log → coaching topics
- Whole team participates
The onboarding ladder
Six-week ramp where new hires move from Level 01 to Level 02 against the rubric, with named milestones. Reduces ramp time from 12 weeks to 6.
- Six-week ramp, Level 01 → 02
- Named milestones every two weeks
- Live rubric scoring against cohort
- Cuts ramp from 12 weeks to 6