
Twelve prompts for running pipeline reviews with Claude

Difficulty: beginner
Setup time: 10 minutes
For: RevOps
A pack of twelve battle-tested prompts for the questions a pipeline review actually needs to answer. Paste a deal export, paste a prompt, get a structured answer in your team’s vocabulary. No agent, no API, no integrations. Just the prompts, ranked by how often you will reach for them.

What you’ll need

  • Claude.ai (Sonnet or Opus tier)
  • A CSV or markdown table of your open pipeline with deal name, amount, stage, days in stage, owner, and last activity
  • Your sales methodology cheat sheet, pasted once at the top of the project
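If your CRM export does not already match this shape, a minimal table with those six columns looks like the sketch below. Column names, deal names, and figures are invented for illustration; match them to your own export.

```csv
deal_name,amount,stage,days_in_stage,owner,last_activity
Acme renewal,48000,Negotiation,21,J. Park,2024-05-02
Globex expansion,120000,Proposal,9,R. Okafor,2024-05-14
Initech new logo,35000,Discovery,44,J. Park,2024-04-03
```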

Setup

  1. Create a Claude project. Name it “pipeline-review.” Drop the methodology doc in as project knowledge. Now every prompt in the pack starts with the right context.
  2. Import the prompts. Copy each of the twelve prompts as a saved prompt in the project. Tag them by use case: weekly review, board prep, deep-dive, escalation.
  3. Paste the pipeline. Most prompts expect the table inline. Two of them expect a single deal’s full history. The pack documents which is which.
  4. Run the most-used three first. “Find the at-risk commits,” “rank deals by close-date credibility,” and “draft the manager talk-track.” These three cover eighty percent of weekly use.

How it works

The twelve prompts are organized in three tiers. Tier one is portfolio-level: ranking, segmentation, slippage detection. Tier two is single-deal: stuck-deal diagnosis, MEDDPICC gap analysis, next-step proposal. Tier three is meeting-prep: manager talk-tracks, exec one-pagers, and the “what should I ask the rep” prompt that managers love.

Each prompt is structured the same way: role, context, input format, output format, an explicit list of things to avoid. The “things to avoid” section is the unsung hero. It bans corporate hedging, prevents Claude from inventing data not in the table, and forces specific dollar figures over vague language.
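As a sketch of that five-part structure (the headings and wording here are invented, not copied from the pack), a tier-one ranking prompt might look like:

```text
ROLE: You are a RevOps analyst reviewing B2B pipeline.
CONTEXT: Use the methodology doc in project knowledge.
INPUT FORMAT: A CSV table with columns deal_name, amount, stage,
  days_in_stage, owner, last_activity.
OUTPUT FORMAT: A ranked list, one line per deal: name, amount,
  risk flag, one-sentence reason.
AVOID: Corporate hedging. Inventing data not present in the table.
  Vague language where a dollar figure exists. If a field is
  missing, say so instead of guessing.
```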

Watch-outs

  • Garbage in, garbage out. If your pipeline export is missing last-activity dates, the at-risk prompt will guess. Always include the activity column.
  • Methodology mismatch. A MEDDPICC prompt run on a BANT-trained team produces unfamiliar output. Swap the methodology doc, do not edit the prompts.
  • Hallucinated specifics. Claude will confidently invent a champion’s name if you let it. The prompts include “if a field is missing, say so” guards. Keep them.
  • Length creep. Output drifts long over time as you ask follow-ups. Reset the conversation between deals.
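Most of these watch-outs trace back to the export itself, so a pre-flight check before pasting pays off. A minimal sketch in Python, assuming the six-column layout described above (the column names are this example's assumption, not the pack's requirement):

```python
import csv
from io import StringIO

# Columns the prompts expect; adjust to your own export's headers.
REQUIRED = ["deal_name", "amount", "stage", "days_in_stage", "owner", "last_activity"]

def preflight(csv_text: str) -> list[str]:
    """Return warnings for a pipeline export before pasting it into Claude."""
    reader = csv.DictReader(StringIO(csv_text))
    fieldnames = reader.fieldnames or []
    warnings = []
    missing_cols = [c for c in REQUIRED if c not in fieldnames]
    if missing_cols:
        warnings.append(f"missing columns: {', '.join(missing_cols)}")
    for i, row in enumerate(reader, start=2):  # row 1 is the header
        for col in REQUIRED:
            if col in fieldnames and not (row.get(col) or "").strip():
                warnings.append(f"row {i}: empty {col}")
    return warnings

sample = (
    "deal_name,amount,stage,days_in_stage,owner,last_activity\n"
    "Acme renewal,48000,Negotiation,21,J. Park,\n"
)
print(preflight(sample))  # ['row 2: empty last_activity']
```

An empty last_activity cell here triggers a warning rather than letting the at-risk prompt guess, which is exactly the failure mode the first watch-out describes.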

Stack

  • Claude — model layer, project knowledge, saved prompts
  • CSV export — pipeline data in a Claude-friendly shape
  • Methodology doc — the lens that makes the output useful