A pack of structured prompts for Claude that turn a role rubric into a tiered set of interview questions: behavioral (probe past behavior under named conditions), situational (response to a hypothetical), technical deep-dive (drill into a claimed competency), and reverse questions (what to expect the candidate to ask back, and what each question signals). Every question is tagged with the rubric dimension it probes, the anchors it discriminates between, and the follow-up to ask if the answer is too rehearsed. Replaces the “we just wing it” interview with a question library the panel actually opens before the call.
The interview panel includes interviewers who don’t run interviews regularly — engineers, hiring managers, IC leads — and need to walk in with prepped questions calibrated to the rubric.
You want consistency across panelists. Every panelist asks variants of the same anchor questions, so the debrief compares notes on the same dimensions.
You’re calibrating a junior interviewer. The pack’s “follow-up if rehearsed” annotations make the deeper signal visible.
When NOT to use
Unstructured cultural-add interviews where the goal is rapport, not signal. Different conversation. The pack is for signal-collection rounds.
Live coding interviews. Different artifact (code-and-talk format). The take-home evaluator workflow handles artifact-evaluation; live-coding is its own workflow.
Rubrics that haven’t passed a fairness check — the pack’s prompts will produce questions that probe the rubric dimensions, including the bad ones. Run the rubric through the diversity slate auditor framing or the Boolean search builder fairness pre-flight first.
Questions you want to lock down for the year. The pack regenerates per-role per-rubric. If your firm needs frozen, reviewed questions for legal compliance (some industries do), use the pack as a starter and lock the output, not the prompts themselves.
Setup
Drop the bundle. Place apps/web/public/artifacts/interview-question-bank-prompt-pack/interview-question-bank-prompt-pack.md somewhere your interviewers can read (Notion, the team wiki, an internal Claude project’s knowledge files).
Author the role rubric. Same rubric the screen and reference workflows use. Without it, the prompts have nothing to probe.
Create a Claude project per role. Drop the rubric in as project knowledge. Save each prompt in the pack as a saved prompt within the project.
Generate the questions. Run each prompt against the rubric. Copy the questions to the panel’s interview-prep doc. Tag each question with the panelist who’ll ask it (see the record sketch after these steps).
Review for tone and fit. The prompts produce competent questions. The hiring manager edits them for the firm’s voice and the role’s specifics.
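For step 4, a minimal sketch of one tagged question record as it might land in the prep doc. The pack's prompts emit markdown rather than a fixed schema, so every field name here is illustrative, not something the pack prescribes:

```python
# Illustrative shape for one generated question, tagged per the pack's conventions.
# Field names are hypothetical; adapt them to however your prep doc stores questions.
question_record = {
    "id": "B1-skill_match-02",
    "tier": "behavioral",
    "question": "Tell me about a production incident you owned end to end.",
    "dimension": "skill_match",               # rubric dimension it probes
    "discriminates_anchors": [3, 4],          # the anchor pair it separates
    "strong_signal": "Names the cause, the fix, and what changed afterward.",
    "weak_signal": "Describes the incident but not their own decisions.",
    "followup_if_rehearsed": "Walk me through a different incident on the same system.",
    "panelist": "Dana (hiring manager)",
}

def untagged(records):
    """Return ids of questions that don't name a dimension and anchor pair (see Watch-outs)."""
    return [r["id"] for r in records
            if not r.get("dimension") or not r.get("discriminates_anchors")]

print(untagged([question_record]))  # -> []
```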
What the pack contains
Twelve prompts, in four tiers.
Tier 1 — Behavioral (probe past behavior under named conditions)
Behavioral questions are the workhorses of structured interviewing. The pack generates questions in the STAR shape (Situation, Task, Action, Result) per rubric dimension, with a follow-up for each that drills past the rehearsed answer.
B1. Produce 3 behavioral questions per rubric dimension. Each tagged with the dimension and the rubric anchor (1-5) it discriminates between.
B2. For each behavioral question, produce one drill-down for the case where the answer is too rehearsed (the panelist can tell the candidate has prepped this exact story). The drill-down asks for a different example, a counter-factual, or a step the candidate skipped.
B3. Produce 3 behavioral questions that probe the negative — times the candidate failed at the dimension. Pre-empts the rehearsed “I’m a perfectionist” non-answer.
Tier 2 — Situational (response to a hypothetical)
Situational questions probe how the candidate would handle a scenario. Less reliable than behavioral but useful for senior-scope questions where the candidate may not have a directly-comparable past situation.
S1. Produce 2 situational scenarios per rubric dimension at the role’s level. Each scenario is calibrated to the level (Senior IC scope problems, not Staff scope; Manager scope problems, not Director scope).
S2. For each scenario, list the answer dimensions the panelist should listen for (specific decision criteria, what they ask before deciding, what they avoid).
Tier 3 — Technical / craft deep-dive
For roles where there’s a craft (engineering, design, sales methodology), this tier produces questions that drill into the candidate’s claimed competency.
T1. Given the rubric’s must_have skills, produce 5 deep-dive questions per skill. Each labeled “shallow” (sanity check the candidate has the skill at all) or “deep” (probe the edges of the skill).
T2. For each deep-dive question, list 3 follow-ups that the panelist asks if the candidate’s first answer is correct but surface-level.
T3. Produce 2 questions that surface a gap in the skill rather than confirm presence. (“Tell me about a time you had to use X but didn’t have Y.” Probes whether the candidate notices the limit.)
Tier 4 — Reverse questions (what the candidate asks back)
Strong candidates ask substantive questions. Weak candidates ask “what’s the culture like.” This tier helps the panelist read the candidate’s questions.
R1. Produce a list of 10 substantive questions a strong candidate might ask, grouped by what each question signals (the candidate is thinking about X, prefers Y, is looking for Z).
R2. Produce a list of 10 weak / generic questions and what each signals (candidate didn’t research, is anxious about basics, is fishing for a specific answer).
Cost reality
Per-role question generation, on Claude Sonnet 4.6:
LLM tokens — typically 5-10k input (rubric + prompt + skill instructions) and 3-6k output (the generated question library) per prompt invocation. Total per role: roughly $0.30-0.60 if running all 12 prompts (token totals are sketched after this list).
Interviewer time — the win. Hand-authoring a behavioral question library per role is 4-8 hours; the pack delivers a starter library in 30 minutes of prompt-and-edit.
Setup time — 15 minutes to set up the Claude project per role. Per-firm setup of the pack (saving prompts, integrating with team wiki) is a one-time 30-60 minute task.
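A minimal sketch of the token math above. It only totals the stated per-prompt ranges; multiply the totals by your account's current per-million-token rates to get dollars, since model pricing changes and is not hard-coded here:

```python
def pack_token_budget(n_prompts=12,
                      input_range=(5_000, 10_000),   # input tokens per prompt, from the range above
                      output_range=(3_000, 6_000)):  # output tokens per prompt, from the range above
    """Total (low, high) token count for one role's full pack run."""
    low = n_prompts * (input_range[0] + output_range[0])
    high = n_prompts * (input_range[1] + output_range[1])
    return low, high

print(pack_token_budget())  # (96000, 192000) tokens per role; cost = tokens x current per-token rates
```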
Success metric
Track three things, monthly:
Cross-panelist question overlap — share of questions asked by ≥2 panelists in the same loop. Should be ≥40% on a calibrated pack (the rubric dimensions ARE the through-line); below 25% means panelists are improvising (one way to compute it is sketched after this list).
Debrief time — wall-clock from “last interview ends” to “decision recorded.” Should drop by ~30% because debriefs are anchored on the same dimensions.
Panelist confidence in their notes — qualitative; ask the panelists “did you walk in with a question library?” The honest answer at most firms is “no, we improvised” — the pack’s success metric is moving that to “yes, and it helped.”
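One way to operationalize the overlap metric, assuming you record which question IDs each panelist actually asked in a loop. Nothing in the pack produces this log, so the data shape below is an assumption:

```python
def cross_panelist_overlap(asked_by_panelist: dict[str, set[str]]) -> float:
    """Share of distinct questions asked by two or more panelists in the same loop."""
    all_questions = set().union(*asked_by_panelist.values())
    if not all_questions:
        return 0.0
    shared = {q for q in all_questions
              if sum(q in asked for asked in asked_by_panelist.values()) >= 2}
    return len(shared) / len(all_questions)

# Hypothetical loop: three panelists, question IDs from the prep doc.
loop = {
    "alice": {"B1-skill_match-01", "S1-ownership-02", "T1-go-03"},
    "bob":   {"B1-skill_match-01", "B3-ownership-01", "T1-go-03"},
    "cara":  {"S1-ownership-02", "R1-read", "B2-skill_match-01"},
}
print(f"{cross_panelist_overlap(loop):.0%}")  # 3 of 6 distinct questions shared -> 50%
```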
vs alternatives
vs hand-authored question library. Hand-authoring is the right call for a small fast-iterating team where the rubric and the questions co-evolve in the founders’ heads. The pack earns its setup cost on teams that hire across multiple panelists per loop.
vs ATS-native question banks (Greenhouse Interview Plans, Ashby Interview Templates). ATS-native is the right call if your team lives in the ATS and wants questions surfaced in-context. Pick the pack if you want the question library version-controlled in your own repo and re-generatable as the rubric evolves.
vs ChatGPT-style “give me interview questions for senior engineer.” Generic chat returns generic questions. The pack is structurally different: every question is tagged with a rubric dimension, an anchor, and a follow-up.
vs no prep at all. Predictable failure mode: panelists ask different questions, the debrief compares apples and oranges, the decision drifts to whoever spoke first.
Watch-outs
Bias inheritance from rubric. Guard: the pack generates questions FROM the rubric. If the rubric has biased dimensions (“culture fit” without anchors, school-prestige scoring), the questions probe the bias. Audit the rubric upstream — see the diversity slate auditor.
Question rehearsal. Guard: the pack’s B2 prompt explicitly produces drill-downs for rehearsed answers. The drill-down asks for a different example or a counter-factual; it does not let the candidate re-run the prepped script.
Generic questions slipping through. Guard: every generated question must reference the rubric dimension and the anchor it discriminates between. Questions that don’t reference an anchor are flagged in the prompt’s output for the panelist to drop or rewrite.
Inconsistent question difficulty across panelists. Guard: the generated questions are tagged with the rubric anchors (1-5) they’re calibrated to. Two panelists asking different questions about the same dimension are still calibrated to the same anchors.
Length blowup. Guard: the pack’s prompts cap output at “3 per dimension, 12 dimensions max” — a typical role’s library lands at ~50-80 questions, not 500. The hiring manager picks 8-15 to actually use per panel slot.
Outdated questions on stale rubrics. Guard: re-run the pack when the rubric changes (the pack is fast — 30 minutes is cheap). Old question libraries linked from interview-prep docs go stale silently otherwise.
Stack
The artifact bundle lives at apps/web/public/artifacts/interview-question-bank-prompt-pack/ and contains:
interview-question-bank-prompt-pack.md — the twelve prompts, paste-ready into Claude
Tools the workflow assumes you use: Claude (the model). The output drops into Notion, the team wiki, or an ATS interview-plan template.
# Interview Question Bank — Twelve Prompts for Claude
A pack of structured prompts for generating a tiered interview question library from a role rubric. Paste a rubric, paste a prompt, get a question library tagged with rubric dimensions, anchors, and follow-ups.
## How to use this pack
1. Create a Claude project named `interview-questions-<role-slug>` per role.
2. Drop the role rubric in as project knowledge. Every prompt below assumes the rubric is loaded and reads from it.
3. Save each prompt below as a saved prompt within the project, tagged by tier.
4. Run them in order. Most prompts finish in 30-90 seconds at Sonnet 4.6 speed.
5. Review the outputs. Edit for the firm's voice. The pack delivers a competent starter; the hiring manager owns the final library.
## Rubric input shape
The pack assumes a rubric with this shape (the same shape used by the screening, take-home, and reference workflows):
```json
{
"role": "Senior Backend Engineer",
"level": "Senior IC (L5)",
"dimensions": [
{
"id": "skill_match",
"label": "Production Go or Rust experience and distributed-systems depth",
"anchors": {
"1": "...",
"2": "...",
"3": "...",
"4": "...",
"5": "..."
}
},
...
]
}
```
If the rubric doesn't load, the prompts halt and ask for the rubric file.
---
# Tier 1 — Behavioral
## B1. Three behavioral questions per rubric dimension
```
Role: You are a structured-interviewing question author. Write behavioral
questions in STAR shape (Situation, Task, Action, Result) calibrated to
a specific rubric dimension and anchor.
Context: The rubric is loaded as project knowledge. Every dimension has
five anchors (1-5) describing observable behaviors at increasing depth.
Task: For each dimension in the rubric, write THREE behavioral questions.
Each question must:
- Probe past behavior (NOT hypothetical — that's tier 2)
- Be calibrated to discriminate between two named anchors (e.g. "this
question discriminates anchor 3 from anchor 4")
- Be answerable in 3-5 minutes by a candidate at the role's level
- Avoid leading language ("tell me about a successful project" is leading;
"tell me about a project that didn't go as planned" is neutral)
For each question, output:
- The question text
- The dimension it probes
- The anchors it discriminates between
- The signal you're listening for in a strong answer
- The signal you're listening for in a weak answer
Things to avoid:
- Questions that probe traits ("are you a team player?") — probe behavior.
- Questions that have a single right answer — prefer questions that probe
  the candidate's framing of the problem.
- "Culture fit" questions without behavioral anchors.
- Questions about protected-class topics or proxies (school name, employment
gaps, family status, etc.).
Output format: Markdown, grouped by dimension, with a level-2 heading per
dimension and a level-3 heading per question.
```
## B2. Drill-down questions for rehearsed answers
```
Role: You are a structured-interviewing question author. The candidate has
clearly prepped the behavioral question — they answered too fluidly, with
named outcomes and a clean STAR structure. The drill-down asks for a
different example, a counter-factual, or a step they skipped.
Context: The rubric is loaded as project knowledge. Tier-1 behavioral
questions (B1 output) are loaded too.
Task: For each B1 question, produce ONE drill-down. Each drill-down must:
- Ask for a different example of the same dimension ("walk me through a
  different time you had to do that")
- Or ask for a counter-factual ("what would you have done differently?")
- Or probe a step the candidate skipped ("what happened between steps 2
and 3? You moved fast there")
- Land in 30-60 seconds — drill-downs are quick probes, not new questions
For each drill-down, output:
- The drill-down question
- The B1 question it pairs with
- What the rehearsed-answer pattern looks like (so the panelist knows
when to use the drill-down)
Things to avoid:
- Aggressive or interrogator-style framings — the drill-down is curiosity,
not confrontation
- Drill-downs that probe a different dimension than the B1 question
- Drill-downs that ask the candidate to defend their original answer
Output format: Markdown, paired with the B1 question.
```
## B3. Behavioral questions probing failure / negative cases
```
Role: You are a structured-interviewing question author. Standard behavioral
questions probe success. Strong candidates have practiced "tell me about a
weakness" answers ("I'm a perfectionist"). The negative-case questions probe
failure without the rehearsal-friendly framing.
Context: Rubric loaded as project knowledge.
Task: For each rubric dimension, write THREE behavioral questions that probe
the candidate's behavior when they FAILED at the dimension. Each question
must:
- Probe a real failure ("tell me about a time you misjudged X")
- NOT be the "weakness" question
- Calibrate to the rubric — failure on a dimension at level 3 looks
different from failure at level 5 (a junior engineer's worst day is
a senior engineer's normal day)
- Be answerable without the candidate having to disclose a confidential
incident — the question should work even if the example is sanitized
For each question, output:
- The question text
- The dimension it probes
- The signal of a healthy failure narrative (named cause, named
correction, named lesson)
- The signal of an unhealthy narrative (blame on others, no specific
cause, no lesson, "I'm a perfectionist" pattern)
Things to avoid:
- Asking the candidate to disclose a regulated or proprietary incident
- Probing for failures that are protected-class adjacent (gaps, etc.)
- Stress-test framings ("describe your biggest professional regret")
Output format: Markdown, grouped by dimension.
```
---
# Tier 2 — Situational
## S1. Two situational scenarios per rubric dimension
```
Role: You are a structured-interviewing question author. Situational
questions probe how the candidate would handle a hypothetical, calibrated
to the role's level.
Context: Rubric loaded as project knowledge. Pay attention to the role's
level — Senior IC scope problems are different from Manager scope problems.
Task: For each rubric dimension, write TWO situational scenarios. Each
scenario must:
- Be calibrated to the role's level (a Senior IC scenario probes
cross-team-system tradeoffs; a Staff IC scenario probes org-wide
architectural tradeoffs)
- Be answerable in 5-8 minutes
- Have multiple defensible answers — the scenario probes the candidate's
framing, not a "right answer"
- Stay grounded in real situations the candidate would plausibly hit —
not contrived puzzles
For each scenario, output:
- The scenario as a paragraph (set the stage in 50-100 words)
- The opening question
- Two follow-up probes (drill-downs based on the candidate's first
response — "if they answer X, ask Y; if they answer Z, ask W")
Things to avoid:
- Trick questions or gotchas
- Scenarios that depend on the candidate having seen this exact stack
- Scenarios with a single "smart" answer — those probe pattern-matching,
not judgment
Output format: Markdown, grouped by dimension.
```
## S2. Listening dimensions per scenario
```
Role: You are a structured-interviewing listening-guide author. Panelists listen
for specific things in each scenario answer. Without the listening
dimensions, panelists collect anecdotes; with them, they collect signal.
Context: Rubric loaded as project knowledge. S1 scenarios loaded too.
Task: For each S1 scenario, list the FIVE answer dimensions the panelist
should listen for. Each dimension must:
- Be observable in the candidate's spoken answer (NOT something only
visible in their resume or in code)
- Map to a specific rubric anchor
For each scenario, output:
- "What strong answers do" (3 bullets — name the moves, the questions
asked back, the criteria stated)
- "What weak answers do" (3 bullets — name the failure patterns: jumping
to solution without clarifying, ignoring constraints, generic framing)
- "What to write down" (the concrete notes the panelist takes that anchor
the debrief later)
Output format: Markdown, paired with each S1 scenario.
```
---
# Tier 3 — Technical / craft deep-dive
## T1. Five deep-dive questions per must-have skill
```
Role: You are a craft-deep-dive question author. The role's rubric names
must-have skills. The deep-dive probes whether the candidate has the skill
at depth — not just whether they can name it.
Context: Rubric loaded as project knowledge. Focus on the must_have skills.
Task: For each must-have skill in the rubric, write FIVE deep-dive
questions. Label each as:
- SHALLOW: sanity check the candidate has the skill at all (e.g. "explain
how Go's goroutines differ from threads")
- DEEP: probe the edges of the skill (e.g. "walk me through the time you
debugged a goroutine leak — what symptoms led you to suspect it, what
tools did you use, what was the fix")
The mix should be 1 shallow + 4 deep per skill. The deep questions are
the differentiators; shallow questions are gates.
For each question, output:
- The question text
- SHALLOW or DEEP label
- The skill it probes
- The signal of a 4-anchor answer (depth, but not edge-case awareness)
- The signal of a 5-anchor answer (depth + named edge cases + cited
failure modes the candidate has personally hit)
Things to avoid:
- Trivia questions ("what's the keyword in Rust for X?") — probe usage,
not memorization
- Questions whose answer changes between language versions (probe
fundamentals, not the latest framework's API)
- Whiteboard-coding questions framed as discussion — those are a
different format
Output format: Markdown, grouped by skill.
```
## T2. Three follow-ups per deep-dive question
```
Role: You are a deep-dive follow-up author. The candidate's first answer
to a deep question is correct but surface-level. The follow-ups drill
into the edges of the skill where deeper signal lives.
Context: Rubric loaded. T1 questions loaded.
Task: For each T1 deep question, produce THREE follow-ups. Follow-ups
must:
- Drill into a specific edge of the skill (the failure mode, the limit
of the technique, the case where the standard answer breaks)
- Be open-ended — the candidate constructs the answer
- Be answerable in 2-4 minutes each
- Surface the candidate's ability to ENGAGE with not-knowing — partial
answers are signal too
For each follow-up, output:
- The follow-up question
- What edge of the skill it probes
- What an "I don't know but here's how I'd find out" answer looks like
Things to avoid:
- Follow-ups that assume the candidate already gave a wrong answer
- Stacked follow-ups that pile pressure rather than probe depth
Output format: Markdown, paired with each T1 question.
```
## T3. Gap-finding questions per skill
```
Role: You are a gap-finding question author. Most technical questions
probe whether the candidate HAS a skill. Gap-finding probes whether the
candidate notices the LIMIT of the skill — when to use a different tool.
Context: Rubric loaded.
Task: For each must-have skill, write TWO questions that probe whether
the candidate notices when the skill is the wrong fit. Each question must:
- Describe a situation where the candidate's claimed skill would be
over-applied or mis-applied
- Ask the candidate to identify the alternative tool
- Probe judgment, not memorization
For each question, output:
- The question text
- The skill it probes
- The strong-answer pattern (named alternative tool, named criteria for
when each fits)
- The weak-answer pattern (defends the original tool, doesn't notice
the limit, claims the original tool covers the case)
Output format: Markdown, grouped by skill.
```
---
# Tier 4 — Reverse questions
## R1. Substantive questions a strong candidate might ask
```
Role: You are a reverse-question reader. Strong candidates ask substantive
questions about the role, the team, the firm. The questions they ask
signal what they prioritize.
Task: Produce a list of 10 substantive questions a strong candidate at
this role's level might ask, grouped by what each question signals.
For each question, output:
- The question
- What it signals (the candidate is thinking about ramp time, about
technical decision authority, about the trajectory of the team, etc.)
- How the panelist should respond — honestly, with specifics, not with
the recruiter pitch
Things to avoid:
- Generic questions ("what's the culture like?") — those are tier R2
- Questions that fish for the panelist's commitment ("would you
recommend joining?") rather than probe substance
Output format: Markdown, grouped by signal.
```
## R2. Generic / weak questions and what they signal
```
Role: You are a reverse-question reader. Weak candidates ask generic
questions. Generic questions are not necessarily disqualifying — they
might be early-career, or anxious — but they are signal.
Task: Produce a list of 10 generic / weak questions and what each
signals.
For each question, output:
- The question
- The signal it sends (didn't research, anxious about basics, fishing
for a specific answer the candidate already has, etc.)
- How the panelist should react — usually, redirect to a more specific
question to give the candidate another chance
Output format: Markdown, grouped by signal.
```
## R0. Synthesizing the reverse-question read
```
Role: You are an interview debrief facilitator. The candidate asked
N questions across the loop. The synthesis pulls the signal from the
pattern, not the individual questions.
Context: Rubric loaded. R1 + R2 outputs loaded.
Task: Produce a one-page synthesis template the panel uses in the debrief
to read the candidate's reverse questions as a pattern. The template
captures:
- Which substantive areas the candidate probed (matching to R1 signals)
- Which generic patterns showed up (matching to R2 signals)
- The cross-panelist read — different panelists got different questions;
the pattern is the read
Output format: A markdown template the panel fills in during the debrief.
```