Haven at every scale

From one person to a hundred agents, humans and AI, one operation that compounds.

What your CX operation needs at one person is not what it needs at a hundred. AI agents are now part of the operation at every stage. Haven adapts the standard across humans and AI as you grow from one person to a hundred.

Solo founder · Pre-seed to seed
You are the support team.
No agents. No tools. Just you and the inbox.

You answer every ticket between product work and investor calls. If you've turned on an AI agent, it runs against a system prompt you wrote in a hurry. Customer feedback is valuable but unstructured. The operation lives in your head, the AI's standard lives in a prompt nobody else reads, and your phrasing for the hardest tickets lives in a Claude conversation that took a hundred revisions to land.

Enable
Haven documents your resolution patterns into a knowledge base as you work. When your first hire starts, the answers are already written down.
Measure
Haven sets up the five metrics that matter at your stage. Not thirty. Five. You can answer "how is support going?" in one sentence.
Know
Haven clusters your tickets into contact reasons automatically. You start seeing patterns: 40% of tickets are about the same three things. Product insight for free.
Grow
Haven tracks volume and handle time. It tells you when you need to hire: not when you are drowning, but six weeks before.
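
A back-of-envelope sketch of that hiring trigger, assuming weekly ticket counts and a per-agent capacity figure (both hypothetical placeholders; Haven's model weighs more than a linear trend, but the shape of the signal is this):

```python
# Hypothetical sketch: project the weekly volume trend forward and flag
# the hiring point roughly six weeks before demand exceeds capacity.

def weeks_until_capacity(weekly_volume, agents, tickets_per_agent_per_week):
    """Fit a linear trend to weekly ticket counts and return how many weeks
    remain before volume exceeds current capacity (None if volume is flat
    or falling, 0 if capacity is already exceeded)."""
    n = len(weekly_volume)
    mean_t = (n - 1) / 2
    mean_v = sum(weekly_volume) / n
    # Least-squares slope: tickets of growth per week.
    slope = sum((t - mean_t) * (v - mean_v) for t, v in enumerate(weekly_volume))
    slope /= sum((t - mean_t) ** 2 for t in range(n))
    capacity = agents * tickets_per_agent_per_week
    if weekly_volume[-1] >= capacity:
        return 0
    if slope <= 0:
        return None
    return (capacity - weekly_volume[-1]) / slope

volume = [310, 325, 350, 340, 380, 395, 420, 445]   # hypothetical weekly tickets
runway = weeks_until_capacity(volume, agents=1, tickets_per_agent_per_week=500)
if runway is not None and runway <= 6:
    print(f"Time to hire: capacity is ~{runway:.0f} weeks away.")
```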

Haven gives a solo founder the operational backbone that experienced CX leaders spend their first 90 days building from scratch.

Under 10 agents · Seed to Series A
The team exists. The system does not.
First hires, first growing pains, first time wondering if quality is slipping.

You hired three people. They handle complex tickets. The AI agent handles the rest. Everyone, human and AI, answers slightly differently. Onboarding was "shadow me, and match the AI's tone." Your Zendesk is still in its default configuration. The AI's prompt hasn't been touched since launch. CSAT moves, and you can't tell which side moved it.

Perform
Haven builds your first QA scorecard, covering both the team's interactions and the AI's. Weekly reviews start. Within a month you can see, backed by data, who is improving and where the AI is drifting.
Enable
Haven generates a structured onboarding programme: weeks 1 through 4, with competency milestones. Your next hire ramps in half the time.
Build
Haven audits your Zendesk config and generates a setup guide: routing rules, SLA policies, key macros. The tool starts working for you.
Improve
Haven documents your first SOPs. When someone asks "how do we handle refunds?" the answer is written down, not in someone's head.

This is where most CX operations calcify. Informal habits become permanent processes before anyone designs them properly. Haven prevents that.

25 to 50 agents · Series A to B
The operation needs to be an operation.
Dedicated CX leader. Multiple shifts. The gap between what is documented and what is practised is showing.

You have a Head of CX now. QA exists for the team. The AI is monitored through a vendor dashboard you don't fully trust. Calibration between human reviewers drifts. The knowledge base has gaps everyone knows about. Reporting describes what happened but not why. The systems exist, but they do not yet talk to each other, and the AI doesn't talk to any of them.

Perform
Haven detects calibration drift between reviewers automatically, and between the team's standard and the AI's behaviour. When two reviewers score the same conversation 20 points apart, or the AI is consistently softer on refund tone than the team, Haven flags it; a minimal sketch of the check appears below.
Measure
Haven adds diagnostic analytics: not just what happened, but why. CSAT dropped because tone scores fell on refund tickets handled by agents who joined after the policy change.
Know
Haven captures operational intelligence in real time and surfaces it when it crosses an alert threshold. Contact reason trends, emerging issues, sentiment shifts, product feedback. Structured, not anecdotal.
Grow
Haven models capacity a quarter ahead. Career levels, skills matrices, and hiring scorecards are structured. The team grows strategically, not reactively.
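
A minimal sketch of the calibration check described under Perform, assuming QA scores on a 0-100 scale and a hypothetical 20-point gap threshold:

```python
# Hypothetical data: flag conversations where two reviewers disagree by
# more than the calibration threshold, per the Perform check above.
from collections import defaultdict
from itertools import combinations

GAP_THRESHOLD = 20  # points on a 0-100 QA scale (assumed)

scores = [  # (conversation_id, reviewer, qa_score)
    ("c101", "reviewer_a", 88), ("c101", "reviewer_b", 64),
    ("c102", "reviewer_a", 75), ("c102", "reviewer_b", 71),
]

by_conversation = defaultdict(dict)
for convo, reviewer, score in scores:
    by_conversation[convo][reviewer] = score

for convo, marks in by_conversation.items():
    for (r1, s1), (r2, s2) in combinations(marks.items(), 2):
        if abs(s1 - s2) > GAP_THRESHOLD:
            print(f"{convo}: {r1}={s1} vs {r2}={s2}, calibration session needed")
```

The same comparison can run with the auto-QA scorer treated as one more reviewer, which is one way the gap between the team's standard and the AI's behaviour would surface.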

Haven connects the seven functions so the Head of CX can see the full picture. A QA issue surfaces a training gap. A volume spike triggers a hiring forecast. The signals flow.

100+ agents · Series B to C
CX is a strategic function, not a cost centre.
Multiple teams. Multiple channels. The board wants to know how CX drives retention.

The operation is mature on the human side. QA runs, analytics exist, processes are documented. The AI side has its own dashboard and its own metrics. The board asks "how is CX performing?" and the answer is a 20-slide deck that takes a week to build, plus a two-page bot-performance section nobody reads. The operation works. It does not yet compound.

Measure
Haven generates automated narrative reporting for leadership, reconciling team metrics and AI metrics into one read. Not charts with commentary. A written operational narrative: what happened, why, what is changing, what to watch.
Perform
Auto-QA supplements human review. Haven flags the conversations that need attention, identifies coaching topics from patterns, and monitors consistency across a 20-person review team.
Know
Real-time operational intelligence. Sentiment shifts detected as they happen. Product feedback synthesised and prioritised. CX intelligence feeds business strategy, not just the support team.
Improve
Continuous improvement runs on data. Process compliance is measured. Bottlenecks are identified before they cause SLA breaches. Changes are tested, tracked, and rolled back if they fail.

At this scale, Haven is not building the operation. It is the intelligence layer that makes the operation self-improving. The VP of CX gets their week back.

Mature hybrid · Compounded operation
AI handles volume. Humans handle complexity. The standard holds because Haven reads both.
Multiple shifts, multiple AI agents, multiple teams. The compounded operation needs the layer that holds across all of them.

AI agents handle 60–80% of inbound volume. Deflection rates look great. But who is QAing the AI? Who is checking whether "deflected" means "solved" or "gave up"? Who decides when a conversation should escalate from AI to human? Who trains the human agents on the 20–40% of conversations the AI cannot handle, which are, by definition, the hardest ones? The hybrid operation has been here for a while. What's missing is the operating system for it.

Perform
Haven QAs AI and human agents through the same framework. Consistent scoring. Calibrated standards. The AI gets better through the same feedback loop as the humans.
Build
Haven monitors AI agent effectiveness in real time. When the bot gives a technically correct but practically unhelpful answer 18% of the time, Haven catches the recontact pattern and flags it. A sketch of that recontact check appears below.
Enable
Human agents now handle only the hardest tickets. Haven identifies the skill gaps specific to high-complexity conversations and builds training around what the AI cannot do.
Measure
Haven separates AI metrics from human metrics, tracks true resolution (not just deflection), and shows leadership the real cost and quality picture of the hybrid operation.
Improve
Haven designs and monitors the handoff rules between AI and human. When should the bot escalate? When is it holding on too long? The escalation framework is data-driven, not guesswork.
Grow
Haven models the new workforce equation. How many humans do you need when AI handles 70% of volume? What roles change? What new skills matter? The capacity model is rebuilt for hybrid.
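
To make the "deflected vs solved" distinction concrete: a sketch of a true-resolution check, counting a bot conversation as resolved only if the customer does not come back about the same issue within a follow-up window (the seven-day window and all data here are hypothetical):

```python
# Hypothetical sketch: "deflected" only counts as "solved" if the customer
# does not recontact about the same issue within the follow-up window.
from datetime import datetime, timedelta

WINDOW = timedelta(days=7)  # assumed recontact window

closures = [  # (customer, issue, bot_closed_at)
    ("u1", "refund", datetime(2025, 3, 1)),
    ("u2", "login",  datetime(2025, 3, 2)),
]
recontacts = [  # (customer, issue, reopened_at)
    ("u1", "refund", datetime(2025, 3, 4)),
]

recontact_keys = {
    (c, i)
    for c, i, t in recontacts
    for cc, ci, closed in closures
    if (c, i) == (cc, ci) and timedelta(0) <= t - closed <= WINDOW
}

truly_resolved = sum(1 for c, i, _ in closures if (c, i) not in recontact_keys)
deflection_rate = 1.0  # what a bot dashboard would report: both tickets closed
true_resolution = truly_resolved / len(closures)
print(f"deflection {deflection_rate:.0%}, true resolution {true_resolution:.0%}")
```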
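And the workforce equation from Grow, as bare arithmetic. Every input is a hypothetical placeholder; the point is that escalated tickets carry complex-ticket handle times, not the blended average:

```python
# Hypothetical back-of-envelope for the hybrid workforce question above.

def human_agents_needed(monthly_tickets, ai_share, escalation_aht_minutes,
                        productive_hours_per_month=120, occupancy=0.85):
    """Humans needed when the AI truly resolves `ai_share` of volume.
    Uses complex-ticket handle time, since escalations are the hard ones."""
    human_tickets = monthly_tickets * (1 - ai_share)
    workload_hours = human_tickets * escalation_aht_minutes / 60
    return workload_hours / (productive_hours_per_month * occupancy)

# 20,000 tickets a month, AI truly resolves 70%, escalations average 18 min:
print(round(human_agents_needed(20_000, 0.70, 18), 1))  # -> 17.6
```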

Every company deploying AI agents will discover the same thing: the bot handles tickets, but nobody is managing the operation around it. Haven is the operating system for the hybrid CX team.

Same system. Same standard. Every stage.

You do not outgrow Haven. It grows with you. The diagnostic tells you where you are today, across both humans and AI. Haven builds what comes next.

Start the diagnostic