For CX leaders

Hired to fix CX. You inherited the agents, not the standard.

You came in to fix CX. What you walked into is two operations under one CSAT number: a team you calibrate, an AI you don't, no standard across both. The agents were here when you arrived. The standard wasn't.

Sound familiar?
"I inherited an AI agent the founder set up in a hurry and a team with a QA scorecard from last year. The two reports don't talk to each other. They don't even use the same words."
"We have a bot dashboard, a QA spreadsheet, and a CSAT chart on a slide. Three different reads, three different audiences, no single answer when the CEO asks how CX is performing."
"The AI vendor sends me weekly performance reports. They are optimistic. I have no way to verify that what they're measuring is what we should be measuring."
"When I want to change how the AI handles a refund, it routes to a vendor CSM. When I want to change how the team handles it, it routes to my team lead. Nobody is responsible for the connection between the two."

You have the expertise. The role hasn't been defined for the operation you actually inherited.

The CX leader job description still reads like 2022. A team, a helpdesk, a QA program, a CSAT target. What you walked into is two operations: a vendor-managed AI agent calibrated by people you don't manage, and a human team calibrated by you. You are being held to a single CSAT number, but the underlying work is split across two systems, two dashboards, and two sets of assumptions about what good looks like.

You know how to build the seven functions. You've done it before. The reframe is that the seven functions now have to hold across humans and AI both, not just across the team you hired. That's not a new playbook; it's the layer that makes your existing playbook hold across both.

Your first 90 days with Haven

From two operations to one read.

Week 1–2
Assess
The diagnostic scores your operation across all seven functions, against work product from the team and the AI. Where the team is calibrated and the AI is drifting. Where the AI is precise and the team is improvising. The shape of the gap between them.
Haven produces: maturity assessment, AI/team rigor delta, priority gap analysis, function-by-function read.
Week 3–4
Foundation
Write the standard down. The decisions that matter on a refund, an outage, an angry escalation. The team is calibrated to it. The AI is built on it. Your CSM at the AI vendor gets the same document you give your team lead.
Haven generates: the written standard, QA scorecard covering both sides, KPI framework that reconciles AI and team metrics into one read.
Week 5–8
Build
Extend across the remaining functions. Process documentation. Knowledge architecture. Contact taxonomy that the team and the AI both work from. Capacity modelling that splits across deflectable and complex. Each calibrated to where you actually are, not where you think you should be.
Haven tracks: progress per function, flags where AI calibration is drifting from team practice, and adapts recommendations as you make calls on what to ship.
Week 9–12
Operate
One read. Anomalies surface across both sides in real time. Improvement opportunities emerge from the data. When the CEO asks how CX is performing, the answer reconciles the team and the AI in one number, with the working underneath it.
Haven becomes: your intelligence layer, monitoring, alerting, and connecting signals across all seven functions and both populations.

What Haven gives you that nothing else does

One read across humans and AI.

Every CX tool reads one side. Your QA platform reads humans. Your AI vendor reads its own bot. Your analytics dashboard reads outcomes. None of them easily surfaces the "why" across both populations, and none of them reads the handoff between them. Haven does.

Perform

A QA flag lands on the team for resolution accuracy. Haven traces it to a product knowledge gap and notices the AI is hitting the same wall on the same intent. One root cause routes to Enable, not two performance conversations.

Know

A new contact cluster emerges. Haven drafts a handling process for the team (Improve), a knowledge article (Enable), and a prompt update for the AI (Build). One signal, three drafts, and the team and the AI stay aligned by default.

Grow

Volume trends 40% above last year. Haven splits the forecast across deflectable and complex, sizes the human team for the complex tail, recommends AI scope expansion for the deflectable volume, and adjusts metric baselines so next quarter's numbers compare like for like.

These aren't hypothetical. They're the connections you already make in your head, from experience, across tools that don't talk. Haven makes them visible, and pushes back when the data warrants. You decide what holds.

See your operational shape.

The diagnostic scores your operation across all seven functions. You already know where the gaps are. This makes it precise.

Start the diagnostic
Join the waitlist