---
name: churn-analysis
description: Take a churned account and produce a structured root-cause analysis — triggering event, contributing factors, missed signals, taxonomy classification, and a prevention recommendation. Use after every confirmed churn (close-lost or non-renewal) so RevOps can aggregate root causes quarterly without recoding free-text CSM notes.
---
# Churn analysis
## Quarterly aggregate metric
The aggregate metric to watch each quarter: the share of churns with a defensible, classified root cause that the CSM did not override on review. The steady-state target is 70-80%. Above 90%, the taxonomy has become too permissive (too many categories, evidence requirements too loose) and Claude finds a label for everything because the buckets are wide. Below 60% indicates the timeline data is too thin, or the taxonomy does not match the shapes of churn the team actually sees.
A diagnostic counter-metric: the share of runs that return insufficient-evidence or no library match. These are not failures; they are the skill being honest. An upward trend means an instrumentation gap (more accounts with thin timelines) or a library gap (more churn shapes the team has not yet codified into prevention plays). Both are useful signals worth addressing deliberately.
## When to invoke
Invoke this skill on a churned account once the close-lost or non-renewal event is recorded in the CRM and the CSM has had at least 24 hours to add their final notes. The output is a post-mortem document the CSM reviews, RevOps stores, and the leadership team aggregates.
Do NOT invoke this skill for:
- **Pre-emptive risk scoring on healthy accounts** — use a health-score model or `risk-scoring-skill` instead. This skill is post-mortem only; running it on a live account anchors the CSM on a churn narrative that has not happened yet.
- **Real-time churn prediction during a renewal cycle** — same reason. The two-pass timeline analysis here assumes the outcome is fixed; using it forward generates false-confidence signals.
- **Win/loss analysis on closed-lost new logos** — those need a different framing (deal narrative, competitor displacement, ICP fit) and a different taxonomy. Use a separate win-loss skill.
- **Single-event explanations** ("they churned because the champion left") — if you already know the cause and just want to record it, edit the CRM field directly. This skill is for cases where the CSM cannot cleanly attribute the churn yet.
## Inputs
- Required: `account_id` — the CRM identifier (HubSpot deal ID, Salesforce account ID, or equivalent)
- Required: `churn_date` — ISO-8601 date the contract ended or close-lost was recorded (e.g. `2026-04-15`)
- Required: `taxonomy` — slug pointing to the team's churn taxonomy file under `references/` (default: `churn-taxonomy`); the skill will refuse to assign a category outside this list
- Optional: `csm_notes` — free-text final notes from the account's CSM, pasted at invocation time
- Optional: `gong_call_ids` — comma-separated list of specific Gong call IDs to weight more heavily in the evidence pass
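For concreteness, an invocation payload matching these fields might look like the following. The JSON shape and values are illustrative (drawn from the sample output); the skill spec does not mandate a wire format:

```json
{
  "account_id": "HUB-5523-ACME",
  "churn_date": "2026-04-15",
  "taxonomy": "churn-taxonomy",
  "csm_notes": "Renewal stalled after the parent-company consolidation directive; no replacement sponsor after Maria left.",
  "gong_call_ids": "18221,18445"
}
```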
## Reference files
Always read the following from `references/` before generating the analysis. These define the team-specific vocabulary; without them, the output regresses to generic "champion left, product gap" answers that aggregate poorly across quarters.
- `references/1-churn-taxonomy.md` — the 5-10 root-cause categories the team has agreed on (replace template contents with your actual taxonomy before first run)
- `references/2-prevention-action-library.md` — the menu of prevention recommendations the skill is allowed to propose (replace template contents with your actual playbook entries)
- `references/3-sample-output.md` — the literal markdown shape the skill emits (do not modify; used to validate format)
## Method
Run these four steps in order. Steps 2 and 3 are the two passes — they must remain separate so evidence extraction is not contaminated by classification anchoring.
### 1. Build the 180-day timeline
Pull from CRM, CS platform, support system, and (optionally) Gong:
- Health-score changes (every recorded delta, with the value before and after)
- Contact changes (departures, new sponsors, role changes — use LinkedIn departure dates when CRM lags)
- Support cases (open dates, severity, time-to-resolution, customer-reported severity)
- Gong call summaries (sentiment shifts, pricing objections, competitor mentions)
- Product usage metrics (weekly active users, key-feature adoption, integration health)
- QBR attendance and outcomes
Order events chronologically. Anchor the timeline at `churn_date - 180 days` and end at `churn_date`. If fewer than 3 events exist within the 30 days immediately before `churn_date`, the skill returns the literal output `"insufficient data — fewer than 3 timeline events in the 30-day pre-churn window; manual CSM postmortem required"` and stops. This guard exists because short, sparse timelines invite hindsight-bias narratives that read confident but cannot survive the CSM's lived experience.
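The sparse-timeline guard at the end of this step can be sketched as follows. Function and parameter names are assumptions; only the 3-event / 30-day numbers and the literal sentinel string come from the spec:

```python
from datetime import date, timedelta

INSUFFICIENT_DATA = ("insufficient data — fewer than 3 timeline events in the "
                     "30-day pre-churn window; manual CSM postmortem required")

def check_pre_churn_density(events, churn_date, window_days=30, min_events=3):
    """Return the insufficient-data sentinel when the pre-churn window is too sparse.

    `events` is assumed to be a list of (event_date, payload) tuples already
    clipped to the 180-day timeline; this is a sketch, not the skill's API.
    """
    window_start = churn_date - timedelta(days=window_days)
    recent = [e for e in events if window_start <= e[0] <= churn_date]
    if len(recent) < min_events:
        return INSUFFICIENT_DATA  # short-circuit: stop before steps 2-4
    return None
```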
### 2. Evidence pass
A first Claude pass that ONLY extracts evidence. No classification, no prevention recommendation. For each timeline event flagged as inflection-worthy (a health drop greater than the team's configured delta, a sponsor change, a missed QBR, a severity-1 ticket), produce:
- The raw quote, ticket excerpt, or metric delta (verbatim — do not paraphrase)
- The date of the event
- Which timeline source it came from (CRM, Gainsight, Zendesk, Gong)
The output of this pass is a flat list of evidence rows. The skill stores it as an intermediate artifact and passes it to step 3 — it does not classify anything yet.
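A minimal sketch of one evidence row as a frozen record, assuming a Python representation. Field names are illustrative; the spec only fixes the three facts each row must carry:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen mirrors "the evidence list is not edited in step 3"
class EvidenceRow:
    date: str     # ISO-8601, e.g. "2026-03-12"
    source: str   # which timeline system: "CRM", "Gainsight", "Zendesk", "Gong"
    excerpt: str  # verbatim quote, ticket excerpt, or metric delta

row = EvidenceRow(
    date="2026-03-12",
    source="Gong",
    excerpt='"the parent company is asking us to justify every non-HubSpot tool"',
)
```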
### 3. Classification pass
A second Claude pass that ONLY classifies. It receives the evidence list from step 2, the taxonomy from `references/1-churn-taxonomy.md`, and nothing else. Two-pass design is the explicit engineering choice: a single-pass model conflates "what happened" with "what category this belongs to," which biases the evidence selection toward whatever category the model already suspects. Forcing the classification pass to work from a frozen evidence list is the guard against that.
The classification pass must produce:
- One **primary** root-cause category (from the taxonomy, exactly — no novel labels)
- Up to two **contributing** root-cause categories (from the taxonomy, exactly)
- For each assigned category: which specific evidence rows support it (cite by date)
If no category passes a 3-evidence-row threshold, the primary category becomes `"insufficient-evidence"` and the analysis ends here. Padding to a category with 1-2 weak evidence rows is the failure mode this threshold guards against.
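The threshold logic can be sketched like this, under the assumption that evidence rows have already been matched to candidate slugs (that matching step is elided; all names here are illustrative, not the skill's interface):

```python
from collections import Counter

def classify(matched_rows, taxonomy, min_rows=3):
    """Assign a primary category only when a slug clears the 3-evidence-row threshold.

    `matched_rows` is a list of (slug, row) pairs: each evidence row already
    matched against a taxonomy category's evidence requirement.
    """
    # Novel labels outside the taxonomy are dropped, never counted.
    counts = Counter(slug for slug, _row in matched_rows if slug in taxonomy)
    ranked = counts.most_common()
    if not ranked or ranked[0][1] < min_rows:
        return "insufficient-evidence", []
    primary = ranked[0][0]
    contributing = [slug for slug, _n in ranked[1:3]]  # at most two, per the spec
    return primary, contributing
```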
### 4. Prevention recommendation
Read `references/2-prevention-action-library.md`. Choose ONE prevention action from that library that, if it had been in place 60 days before the churn date, would have surfaced the primary root cause as a watchable signal. The skill is not allowed to invent a new prevention action — if no library entry fits, it returns `"no library match — prevention action requires human design"`. This forces the team to grow the library deliberately rather than letting Claude generate a different bespoke recommendation per churn that nobody can aggregate.
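A sketch of the library-constrained selection, assuming the library is loaded as a mapping from action slug to the taxonomy categories it pairs with (the data shape is an assumption; the fallback string is the spec's literal):

```python
NO_MATCH = "no library match — prevention action requires human design"

def pick_prevention(primary_cause, library):
    """Choose one library entry whose paired categories include the primary cause.

    `library` maps action slug -> set of taxonomy slugs, mirroring the
    "Pairs most with" field in the template. Inventing an action is not an option:
    the only outcomes are a library slug or the literal no-match string.
    """
    candidates = [slug for slug, pairs in library.items() if primary_cause in pairs]
    return candidates[0] if candidates else NO_MATCH
```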
## Output format
Emit exactly this structure. `references/3-sample-output.md` mirrors this shape and the two must stay in lockstep; the format must not drift.
```markdown
# Churn analysis — {Account name} ({account_id})
**Churn date:** {YYYY-MM-DD}
**Analysis date:** {YYYY-MM-DD}
**CSM:** {name}
**Contract value at churn:** {ARR}
## Triggering event
{One sentence naming the single event closest in time to the churn that the evidence supports as the proximate cause. Cite the evidence row.}
## Root cause classification
**Primary:** `{taxonomy-slug}` — {one-sentence rationale}
**Contributing:** `{taxonomy-slug}`, `{taxonomy-slug}` (or "none")
### Evidence supporting primary
- {YYYY-MM-DD} ({source}): "{verbatim quote or metric delta}"
- {YYYY-MM-DD} ({source}): "{verbatim quote or metric delta}"
- {YYYY-MM-DD} ({source}): "{verbatim quote or metric delta}"
### Evidence supporting each contributing factor
- **{contributing-slug-1}:**
- {YYYY-MM-DD} ({source}): "{verbatim quote or metric delta}"
- **{contributing-slug-2}:**
- {YYYY-MM-DD} ({source}): "{verbatim quote or metric delta}"
## Missed signals
- {Signal that, in hindsight, was visible but not acted on. Cite the date it became visible and the date action was finally taken (if ever).}
## Deviation from success plan
{One paragraph naming the specific commitments in the success plan that did not happen — onboarding milestones missed, integrations not shipped, executive sponsor not engaged. Reference success-plan dates.}
## Prevention recommendation
**Action:** {action slug from prevention-action-library}
**Trigger:** {the signal that would have fired this action 60 days earlier}
**Owner:** {role — CSM, RevOps, Product, etc.}
## CSM review
- [ ] CSM has read this analysis
- [ ] Factual errors corrected (track changes)
- [ ] Root-cause classification confirmed or overridden (CSM judgment wins)
- [ ] Prevention recommendation accepted, modified, or rejected (with reason)
```
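One way to enforce "the format must not drift" is a structural check against the required headings. This sketch assumes a plain substring check is enough; it is a hypothetical helper, not part of the skill:

```python
REQUIRED_SECTIONS = [
    "## Triggering event",
    "## Root cause classification",
    "## Missed signals",
    "## Deviation from success plan",
    "## Prevention recommendation",
    "## CSM review",
]

def missing_sections(markdown_text):
    """Return the required section headings absent from an emitted analysis."""
    return [h for h in REQUIRED_SECTIONS if h not in markdown_text]
```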
## Watch-outs
- **Hindsight bias.** It is trivial to construct a clean narrative after the fact, especially with 180 days of timeline. Guard: the evidence pass (step 2) is structurally separated from the classification pass (step 3), and the classification pass refuses to assign a category without at least 3 evidence rows that explicitly cite dates and sources. If the CSM disagrees with the classification on review, the CSM's judgment wins and the override is recorded.
- **Taxonomy creep.** The temptation after every analysis is to add a new category that captures the unique flavor of this churn. Guard: the classification pass is constrained to the existing taxonomy file and refuses novel labels — the skill returns `insufficient-evidence` rather than minting a new category. New categories require a deliberate edit of `references/1-churn-taxonomy.md` outside the skill, which keeps growth slow and aggregation possible.
- **Champion-departure over-attribution.** "Champion left" is the easiest narrative and the most-overused category in unaided CSM postmortems. Guard: the `champion-departure` category in the taxonomy template requires a LinkedIn departure date OR a CRM contact-change record dated within 90 days of the churn — the classification pass will not assign it on a Gong-only signal ("they mentioned the new VP doesn't see the value").
- **Hallucinated attribution from sparse data.** Short timelines invite confident fiction. Guard: the 30-day-window / 3-event minimum at the end of step 1 short-circuits the analysis with `insufficient data` rather than producing a polished output that does not deserve to exist.
- **Prevention recommendation as creativity exercise.** Each bespoke recommendation makes the quarterly aggregate useless. Guard: step 4 chooses from a fixed library file (`references/2-prevention-action-library.md`) and refuses to invent. If no library entry fits, the skill says so and a human designs the new entry deliberately.
# Churn taxonomy — TEMPLATE
> Replace this template's contents with your team's actual taxonomy.
> The churn-analysis skill reads this file on every run. The classification
> pass is constrained to these slugs exactly — it will not invent new
> categories. Keep the list between 5 and 10 entries; more than that and
> the quarterly aggregate becomes noise.
## Categories
Each category has a slug (used by the skill), a one-sentence definition, and an evidence requirement that the classification pass enforces before it will assign the category.
### `product-gap`
The customer needed a capability the product does not ship and the gap was material to their use case.
Evidence requirement: at least one Gong quote, support ticket, or written request naming the missing capability AND a roadmap reference (committed, deferred, or rejected) that the CSM can cite.
### `champion-departure`
The economic buyer or primary champion left the customer organization and no replacement sponsor was established.
Evidence requirement: a LinkedIn departure date OR a CRM contact-change record dated within 90 days of the churn date. A Gong-only mention ("the new VP doesn't see value") is not sufficient — see watch-out in `SKILL.md`.
### `pricing`
The renewal price exceeded the customer's willingness or budget. Includes both list-price increases and seat-count escalation triggering a budget review.
Evidence requirement: a written pricing objection (Gong quote, email, or CRM note) AND a comparison to the prior contract value showing the delta.
### `consolidation`
The customer chose to consolidate onto an adjacent platform they already own (typically a suite vendor — HubSpot, Salesforce, Microsoft) rather than maintain a best-of-breed stack.
Evidence requirement: explicit naming of the consolidation target in customer communication. "They went with HubSpot" without a quote is not sufficient.
### `service-failure`
A specific incident or sustained service-quality issue (outages, support response times, repeated bugs in critical workflows) that the customer named as the reason for non-renewal.
Evidence requirement: linked support tickets or incident IDs AND a written customer reference to the incident as a churn driver.
### `adoption-failure`
The customer never reached the threshold of usage at which the product delivers value, regardless of CSM effort.
Evidence requirement: usage metrics showing weekly active users below the team's configured success-plan threshold for at least the final 60 days of the contract.
### `restructure`
The customer's business changed in a way that eliminated the use case (layoffs, division shutdown, acquisition, pivot).
Evidence requirement: a public announcement (press release, news article, LinkedIn post by an executive) of the structural change AND a CRM note linking the structural change to the non-renewal decision.
### `competitive-displacement`
The customer chose a direct competitor for the same use case (not consolidation onto a suite they already own).
Evidence requirement: explicit naming of the competitor in customer communication AND a comparison the customer ran or referenced.
## Adding a new category
Do not add categories inside the skill run. If a churn does not fit any existing slug, the skill returns `insufficient-evidence` for the primary category. The team reviews `insufficient-evidence` cases monthly and decides — out of band — whether a new category is justified. New categories require:
1. At least 3 historical churns that would have been classified under the new category, retroactively.
2. A definition narrow enough not to overlap with existing slugs.
3. An evidence requirement strict enough to prevent over-attribution.
## Last edited
{YYYY-MM-DD}
# Prevention action library — TEMPLATE
> Replace this template's contents with your team's actual prevention
> playbook. The churn-analysis skill chooses one entry from this file
> per analysis (step 4). It is not allowed to invent new actions — if
> no entry fits, the skill returns `no library match — prevention
> action requires human design` and a human extends the library
> deliberately. This keeps quarterly aggregates of "we recommended X
> for Y churns" meaningful.
## Format
Each entry has a slug (used by the skill), a one-sentence description, the trigger condition that should fire it, the owner role, and the churn category it most often pairs with.
## Entries
### `health-score-alert-multi-week-drop`
Fire a CSM alert when the health score drops by 15 points or more over any rolling 14-day window.
- Trigger: rolling 14-day delta ≤ -15
- Owner: CSM
- Pairs most with: `adoption-failure`, `service-failure`
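The rolling-window trigger for this entry could be checked mechanically along these lines (the `(day, score)` sample shape and the function name are assumptions):

```python
def multi_week_drop(samples, threshold=-15, window_days=14):
    """True when the health score drops 15+ points within any rolling 14-day window.

    `samples` is a list of (day_offset, score) pairs sorted by day.
    """
    for i, (d1, s1) in enumerate(samples):
        for d2, s2 in samples[i + 1:]:
            # Compare every later sample that still falls inside the window.
            if d2 - d1 <= window_days and s2 - s1 <= threshold:
                return True
    return False
```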
### `sponsor-change-detection`
Cross-reference CRM contacts against LinkedIn weekly. Flag any departure of a contact tagged `economic-buyer` or `champion`.
- Trigger: LinkedIn departure of a contact tagged with one of the champion roles
- Owner: RevOps (automation), CSM (response)
- Pairs most with: `champion-departure`
### `quarterly-pricing-sensitivity-check`
In every QBR, ask the buyer's stated budget posture for the next contract cycle. Record verbatim in the CRM.
- Trigger: every QBR, no exceptions
- Owner: CSM
- Pairs most with: `pricing`
### `escalation-on-severity-1-pattern`
Auto-escalate to the VP of CS when an account opens 3 or more severity-1 tickets in any rolling 60-day window.
- Trigger: ≥ 3 sev-1 tickets in 60 days
- Owner: Support → VP CS
- Pairs most with: `service-failure`
### `success-plan-milestone-tracking`
Each success plan defines 3-5 milestones with dates. The CSM reviews status weekly and flags any milestone slipping by more than 14 days.
- Trigger: milestone slip > 14 days
- Owner: CSM
- Pairs most with: `adoption-failure`
### `consolidation-conversation-trigger`
When a customer publicly announces a strategic vendor consolidation initiative, the CSM books a check-in within 14 days to position the product against displacement.
- Trigger: public announcement (press release, earnings call, blog) of a consolidation initiative naming a competing platform
- Owner: CSM + AE
- Pairs most with: `consolidation`
### `usage-threshold-alert`
Fire an alert when weekly active users drops below the success-plan threshold for two consecutive weeks.
- Trigger: WAU < threshold for 2 weeks
- Owner: CSM
- Pairs most with: `adoption-failure`
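The two-consecutive-weeks condition for this entry reduces to a run-length check; a sketch, with illustrative names:

```python
def usage_alert(weekly_active_users, threshold, consecutive=2):
    """True when WAU stays below the success-plan threshold for N consecutive weeks."""
    run = 0
    for wau in weekly_active_users:
        run = run + 1 if wau < threshold else 0  # reset on any healthy week
        if run >= consecutive:
            return True
    return False
```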
### `restructure-watchlist`
Maintain a watchlist of accounts where the customer has announced layoffs, M&A, or strategic pivot. Re-validate the use case within 30 days of the announcement.
- Trigger: public announcement of restructure event
- Owner: CSM + AE
- Pairs most with: `restructure`
### `competitive-mention-alert`
Flag any Gong call where a named competitor appears in customer speech. Notify the AE and CSM jointly.
- Trigger: competitor named in customer speech on a Gong call
- Owner: AE + CSM
- Pairs most with: `competitive-displacement`
## Adding a new entry
The skill flags `no library match — prevention action requires human design` when no entry fits a churn. Review these monthly. Add a new entry only when:
1. The trigger is mechanically detectable (a metric, a date, a public event) — not a vibe.
2. The owner is a single named role.
3. The action is small enough to actually happen on the timeline between the trigger firing and the churn risk crystallizing.
## Last edited
{YYYY-MM-DD}
# Sample output
> This file shows the literal markdown shape the churn-analysis skill
> emits. It exists so the skill can validate its own output against a
> known structure and so reviewers can preview what they'll receive.
> Do not modify unless you also update the `Output format` section in
> `SKILL.md` — the two must stay in lockstep.
## Example: a fully populated analysis
```markdown
# Churn analysis — Acme Industries (HUB-5523-ACME)
**Churn date:** 2026-04-15
**Analysis date:** 2026-04-22
**CSM:** Jordan Lee
**Contract value at churn:** $84,000 ARR
## Triggering event
The renewal call on 2026-04-08 in which the new VP of Marketing
declined to commit to a renewal pending a stack consolidation review,
citing the parent company's directive to consolidate on HubSpot.
## Root cause classification
**Primary:** `consolidation` — Acme's parent company issued a
March 2026 directive to consolidate marketing tooling onto HubSpot
across all subsidiaries; Acme was a subsidiary affected by this.
**Contributing:** `champion-departure`, `adoption-failure`
### Evidence supporting primary
- 2026-03-12 (Gong, call ID 18221): "Look, the parent company is
asking us to justify every non-HubSpot tool. We have to make that
case and right now I can't."
- 2026-03-28 (CRM note, Sarah K.): "VP confirmed parent-company
consolidation initiative; Acme expected to comply by Q3."
- 2026-04-08 (Gong, call ID 18445): "I'm not signing the renewal
until I've heard back from corporate on what we're keeping."
### Evidence supporting each contributing factor
- **champion-departure:**
- 2026-02-14 (LinkedIn): Maria Chen (former Director of
Marketing Ops, our champion since 2024) departed for a new role.
- 2026-02-20 (CRM contact-change): Maria Chen marked inactive;
no replacement sponsor identified.
- **adoption-failure:**
- 2026-01 to 2026-04 (Gainsight): WAU dropped from 34 to 11 over
the final 90 days, below the success-plan threshold of 25.
## Missed signals
- The parent-company consolidation initiative was announced
publicly on 2026-02-03 in an investor call transcript. The CSM
did not see this until the renewal call on 2026-04-08 — a 64-day
gap during which the conversation could have shifted from
defending the renewal to positioning a smaller, integrated
footprint.
## Deviation from success plan
The success plan signed on 2025-10-01 committed Acme to ship two
integrations (Salesforce sync by 2025-12-15, lead-scoring webhook
by 2026-02-01). Neither shipped. The lead-scoring webhook was
deprioritized after Maria's departure on 2026-02-14, and no new
sponsor was found to advocate for it. WAU degradation tracks
directly against this missed milestone.
## Prevention recommendation
**Action:** `consolidation-conversation-trigger`
**Trigger:** the parent-company investor-call mention on 2026-02-03
should have fired a watchlist alert that booked a CSM check-in
within 14 days. That check-in would have given Acme 60 days to
position the product against displacement instead of 7.
**Owner:** CSM (Jordan Lee) + AE (Pat Morgan)
## CSM review
- [ ] CSM has read this analysis
- [ ] Factual errors corrected (track changes)
- [ ] Root-cause classification confirmed or overridden (CSM judgment wins)
- [ ] Prevention recommendation accepted, modified, or rejected (with reason)
```
## Example: insufficient-data short-circuit
When the timeline has fewer than 3 events in the 30 days before churn, the skill stops at step 1 and emits:
```markdown
# Churn analysis — Acme Industries (HUB-5523-ACME)
**Churn date:** 2026-04-15
**Analysis date:** 2026-04-22
**Status:** insufficient data — fewer than 3 timeline events in the
30-day pre-churn window; manual CSM postmortem required.
The skill cannot produce a defensible root-cause classification
from the available timeline. Recommended next step: the account's
CSM writes a free-text postmortem and the team reviews whether
instrumentation should be improved for accounts of this profile
(low-touch tier, light Gainsight coverage, or similar).
```
## Example: insufficient-evidence classification
When the evidence pass produces evidence rows but no category clears the 3-row threshold, the skill emits the full structure above with primary set to `insufficient-evidence` and a note that the analysis ends without a prevention recommendation pending CSM input.