A Claude Skill that takes a job description and the company’s structured interviewing standards, and produces a complete interview loop design — interview stages, per-stage rubric dimensions, behavioral questions per dimension, scorecard templates, and recommended interviewer assignments. It replaces “we’ll figure out the interview process when we have a candidate” with a 30-minute setup that produces operational discipline.
What you’ll need
- Claude Code or Claude.ai with custom Skills enabled
- The role’s job description (output from the JD writer Skill or equivalent)
- The company’s interview standards — typical loop length, required interview types, interviewer-pool constraints
- The team’s structured interview rubric library
Setup
- Drop the Skill. Place `interview-loop-builder.skill` into your Claude Code skills directory. The Skill exposes one callable function: `design_loop`.
- Configure the standards. Edit `interview_standards.yaml` with the maximum loop length (e.g., 4 stages), the required interview types (recruiter screen, HM screen, on-site loop), and interviewer-pool eligibility per role family.
- Configure the rubric library. Place existing role-family rubrics in `rubrics/` for the Skill to pattern-match against.
- Test on a known role. Run the Skill on a role whose loop design already exists, compare its output to the existing loop, and tune the config.
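A minimal `interview_standards.yaml` might look like the sketch below. The field names and values are illustrative assumptions, not the Skill’s documented schema — check the Skill’s own docs before copying:

```yaml
# Illustrative sketch of interview_standards.yaml.
# Field names are assumptions, not a documented schema.
max_loop_stages: 4
required_stages:
  - recruiter_screen
  - hiring_manager_screen
  - onsite_loop
interviewer_pools:
  software_engineering:
    eligible_levels: [senior_engineer, staff_engineer]
    max_interviews_per_week: 3
  product_management:
    eligible_levels: [group_pm, director]
    max_interviews_per_week: 2
```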
How it works
The Skill takes the JD and:
- Identifies the rubric dimensions. From the JD’s must-have-skills section, maps to the team’s rubric library — what’s being evaluated, at what depth.
- Designs the stage progression. Recruiter screen (fit + interest), HM screen (depth on top dimensions), on-site loop (full rubric coverage with one dimension per interviewer where possible).
- Generates per-stage interview content. Behavioral questions per rubric dimension, suggested probing follow-ups, scorecard with anchor descriptions per score level.
- Suggests interviewer assignments. Based on rubric-dimension-to-interviewer-skill matching from the eligible interviewer pool.
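The first step above — mapping the JD’s must-have skills onto rubric dimensions — can be sketched with a simple keyword-overlap heuristic. The rubric library, dimension names, and matching logic here are illustrative assumptions; the Skill’s actual matching may be more sophisticated:

```python
# Sketch of rubric-dimension mapping via keyword overlap.
# RUBRIC_LIBRARY and its keywords are hypothetical examples.
from dataclasses import dataclass


@dataclass
class RubricDimension:
    name: str
    keywords: set[str]  # terms this dimension evaluates


RUBRIC_LIBRARY = [
    RubricDimension("Technical Depth", {"python", "distributed", "debugging"}),
    RubricDimension("Systems Design", {"architecture", "scalability", "tradeoffs"}),
    RubricDimension("Collaboration", {"cross-functional", "mentoring", "communication"}),
]


def map_must_haves_to_dimensions(must_haves: list[str]) -> list[str]:
    """Return rubric dimensions whose keywords overlap the JD's must-have skills."""
    tokens = {word.lower() for skill in must_haves for word in skill.split()}
    return [dim.name for dim in RUBRIC_LIBRARY if dim.keywords & tokens]


dims = map_must_haves_to_dimensions(
    ["Distributed systems debugging", "Cross-functional communication"]
)
print(dims)  # → ['Technical Depth', 'Collaboration']
```

Dimensions that match nothing in the library are the signal to extend the rubric library before running the loop design, per the first watch-out below.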
Output
A complete interview loop document with:
# Interview Loop: [Role] @ [Team]
## Stage 1: Recruiter Screen (30 min)
- Goal: confirm fit, interest, basic qualifications, comp alignment
- Key questions: [3-5 questions]
- Disqualifying signals: [list]
## Stage 2: Hiring Manager Screen (45 min)
- Goal: depth on top 2 rubric dimensions
- Rubric dimensions covered: [list]
- Behavioral questions: [4-6 questions with rationale]
- Scorecard template: [linked]
## Stage 3: On-Site Loop (4 interviews, 60 min each)
- Interview A — [Dimension 1, e.g. Technical Depth]
  - Questions: [3-4 behavioral + 1 technical exercise]
  - Suggested interviewer: [pool match]
- Interview B — [Dimension 2, e.g. Systems Design]
  - ...
- Interview C — [Dimension 3, e.g. Collaboration]
  - ...
- Interview D — [Dimension 4, e.g. Leadership / Influence]
  - ...
## Stage 4: Debrief
- Format: independent scoring before group discussion
- Rubric: [linked]
- Decision criteria: [explicit thresholds]
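The Stage 4 debrief mechanics — independent scoring, then explicit decision thresholds — can be sketched as a small aggregation helper. The 1–4 scale, threshold, and veto floor are illustrative assumptions, not a recommendation:

```python
# Sketch of the debrief: interviewers score independently, then scores
# are aggregated against explicit thresholds. Scale and values are
# hypothetical; configure them to match your own decision criteria.
from statistics import mean

HIRE_THRESHOLD = 3.0  # mean score required to advance
VETO_FLOOR = 2        # any single score at/below this forces a group discussion


def debrief_decision(independent_scores: dict[str, int]) -> str:
    """Aggregate per-interviewer scores (1-4 scale) into a decision."""
    scores = list(independent_scores.values())
    if min(scores) <= VETO_FLOOR:
        return "discuss"  # one low score triggers the group discussion
    return "hire" if mean(scores) >= HIRE_THRESHOLD else "no hire"


print(debrief_decision({"A": 4, "B": 3, "C": 4, "D": 3}))  # → hire
print(debrief_decision({"A": 4, "B": 2, "C": 4, "D": 4}))  # → discuss
```

Making the thresholds explicit in code (or in the scorecard template) is what keeps the debrief from sliding back into unstructured group persuasion.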
Where it fits
Use this Skill at the start of every new role — alongside the JD writer. The output gets configured in Ashby or Greenhouse as the role’s interview structure; recruiters and hiring managers run candidates through the resulting structured loop.
The compounding benefit: structured loops yield higher quality of hire than ad-hoc loops, with materially less per-role design overhead once AI handles the design work.
Watch-outs
- Rubric quality determines loop quality. A vague rubric library produces vague interview questions. Invest in the rubric design before deploying the Skill at scale.
- Hiring manager review is mandatory. AI-designed loops are the starting point; the hiring manager validates the dimensions reflect the actual role priorities.
- Calibrate against your culture. Loop length, interview style, behavioral-vs-technical balance vary by company. Configure the standards file to match.
- Don’t skip interviewer training. The loop is operational, not magical; interviewers still need training on behavioral interviewing discipline and rubric application.
- Update with feedback. When interview intelligence reveals questions that aren’t producing useful signal, update the Skill’s config.