A Claude Skill that takes a candidate's full panel — each interviewer's structured scorecard, optional BrightHire or Metaview transcripts, and the role rubric — and produces an evidence-grounded debrief brief the panel reads before the synchronous debrief meeting. The brief surfaces aggregate signal per rubric dimension, areas of agreement and disagreement, the specific decision-points the panel needs to resolve, and follow-up questions when signal is thin. It deliberately never emits a hire/no-hire recommendation — that is the panel's job, and treating it otherwise puts the workflow inside the high-risk regime of EU AI Act Annex III and most US state hiring-AI statutes.
The downstream effect: debriefs become 30-minute discussions of the real disagreements instead of 90-minute reviews of who scored what.
When to use it
Run the skill when all of the following are true:
A full interview loop has concluded for the candidate, with at least 3 distinct interviewers covering the role rubric.
Every interviewer submitted a structured scorecard against the rubric (free-text-only scorecards fail the input check in step 1 of the skill — see apps/web/public/artifacts/interview-debrief-summary-skill/SKILL.md).
The synchronous debrief meeting is at least 2 hours away. The brief is meant to be read in advance, not skimmed during the meeting.
The role has a structured rubric matching the shape in apps/web/public/artifacts/interview-debrief-summary-skill/references/1-interview-rubric-template.md — every dimension has a 1-5 anchor table, and every anchor has a behavioral description.
When NOT to use it
The skill is the wrong tool for several adjacent jobs:
Auto-deciding hire/no-hire. The brief never emits a final decision; it emits decision-points for the panel. Auto-deciding triggers EU AI Act Annex III obligations, NYC LL 144's bias-audit requirement, IL AIVI's consent requirements, and MD HB 1202's notification rules. The skill is built to fall outside that regime; wiring it into auto-decision logic pulls it back in.
Sending feedback to candidates without recruiter review. The brief is internal. The synthesized rationale text uses internal-panel phrasing that becomes evidence in a discrimination claim if shown to the candidate verbatim.
Replacing the panel's debrief conversation. The brief is the input to the discussion, not a substitute. "The brief shows consensus, so let's skip the debrief" is the failure mode the rules in references/3-disagreement-escalation.md are designed against — frictionless consensus is itself a calibration concern.
Single-interviewer loops. Below 3 interviewers, panel synthesis is not meaningful. Use a single-interviewer feedback workflow instead.
Transcripts without consent. Two-party-consent jurisdictions (CA, FL, IL, MD, MA, MT, NH, PA, WA) make this a hard halt. Do not pass BrightHire or Metaview transcripts unless the candidate consented to recording at the start of the interview.
Calibration sessions about the rubric itself. When the panel is debating the rubric (not the candidate), the brief's per-dimension synthesis is noise. Run the calibration session separately, then re-run the brief once the rubric is stable.
Setup
The artifact bundle lives at apps/web/public/artifacts/interview-debrief-summary-skill/. It contains:
SKILL.md — the Claude Skill definition with frontmatter, rules for when to invoke, the six-step method, the literal output format, and the watch-out / guardrail pairs.
references/1-interview-rubric-template.md — the structured rubric shape the skill validates inputs against.
references/2-debrief-brief-format.md — the literal Markdown format the brief is written in.
references/3-disagreement-escalation.md — the deterministic decision-point rules (range, bar-raiser veto, HM-vs-panel divergence, single-no-among-yes, coverage gap, under-evidenced cluster).
To get the workflow running:
Drop the bundle into your Claude Code skills directory. Place interview-debrief-summary-skill/ under your project's .claude/skills/ (or your team's shared skills location).
Replace the rubric template with your role-specific rubric. Edit references/1-interview-rubric-template.md per role — every dimension needs a 1-5 anchor table with behavioral descriptions. Keep the dimension count between 4 and 7. Below 4, the panel cannot triangulate; above 7, scorecards get filled out as a chore and evidence quality degrades.
Wire up the scorecard export. Configure your ATS export so the skill can read structured scorecards. Ashby, Greenhouse, and Lever each expose scorecard JSON via API; the skill expects an array of {interviewer_id, interviewer_role, dimension_scores, evidence_notes} per the Inputs block in SKILL.md.
Test on a known candidate. Run it on a candidate the panel has already debriefed and decided on. Compare the brief's decision-points against the topics the actual debrief discussed. If the brief surfaces topics the panel did not discuss (or misses topics the panel did discuss), tune the rubric — not the prompt — first.
Set up the audit log directory. The skill appends one line per run to audit/<YYYY-MM>.jsonl containing the rubric SHA, interviewer count, decision-point count, and timestamp. No candidate PII goes in the audit line. The log is what makes the workflow defensible under NYC LL 144 / EU AI Act scrutiny.
What the skill actually does
The six-step method runs in this order, and the order is load-bearing:
Validate the rubric and inputs. Halt on free-text-only rubrics, on fewer than 3 interviewers, on dimensions covered by fewer than 2 interviewers, on evidence_notes strings under 20 characters. Halting instead of warning is intentional — a brief generated on partial inputs becomes the panel's mental anchor.
Aggregate per dimension (deterministic). Compute mean, range, standard deviation, and per-interviewer-role breakdown. The LLM sees no scorecards yet at this point.
Identify decision-points (deterministic). Apply the six rules in references/3-disagreement-escalation.md. Decision-points are based on the structured signal, not on what the LLM thinks reads as disagreement.
Synthesize per dimension. The LLM produces a two-to-three-sentence synthesis per dimension, citing evidence_notes strings verbatim in quotation marks. Paraphrasing is where bias enters; the skill forbids it. When transcripts are available, the synthesis cites the timestamp range. "Insufficient signal — recommend follow-up" is a first-class output, distinct from "no recommendation" — the absence of evidence on a dimension is information the panel needs.
Calibration check. Compare the candidate's score distribution against the rolling mean of the last 5 debriefs for the same role. Findings appear in a "Calibration note" block at the end of the brief, never inline per dimension. The intent: frame the conversation, not adjust scores.
Write the brief and stop. Write to briefs/<candidate_id>-<YYYYMMDD>.md. Append a line to the audit log. Call no "send to candidate", "post to Slack", or "update ATS stage" endpoint. The brief is internal until the recruiter and hiring manager decide what to do with it.
The output format is fixed (see apps/web/public/artifacts/interview-debrief-summary-skill/references/2-debrief-brief-format.md) and intentionally has no "Recommendation" section — only "Aggregate signal", "Per-dimension synthesis", "Decision-points for the panel", "Follow-up questions", "Calibration note", and "Appendix — per-interviewer evidence". A reader who tries to read off a hire decision finds the structure pushes them back to the discussion.
Cost reality
A typical brief for a 5-interviewer loop with 5 rubric dimensions and no transcripts attached lands at roughly 18-25k input tokens (rubric + scorecards + evidence notes + the three reference files) and 4-6k output tokens. With Claude Sonnet at current API pricing, that is around $0.10-$0.15 per debrief. With transcripts attached (a typical 30-minute interview transcript: 7-10k tokens each), a 5-interviewer loop pushes to $0.40-$0.70 per debrief.
The time-saved math is the load-bearing number: a typical 5-interviewer debrief runs 60-90 minutes, of which 30-50 minutes are the "what did each of us see" round-robin before any real decision discussion happens. The brief replaces the round-robin. Recruiters running this skill at one of our reference orgs report debrief meetings averaging 28 minutes (down from 75) for loops where the brief was distributed at least 4 hours in advance.
That is roughly 45 minutes saved per debrief, across (typically) 5 interviewers — about 3.75 person-hours of meeting time per loop, at a cost under a dollar.
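A quick sanity check of that math, as a sketch: the per-million-token prices below are assumptions to be replaced with whatever your current Claude API rate card says.

```python
def estimate_debrief_cost(input_tokens: int, output_tokens: int,
                          usd_per_m_input: float = 3.00,
                          usd_per_m_output: float = 15.00) -> float:
    """Rough per-debrief cost. The default prices are assumed list rates
    per million tokens; substitute your actual contract pricing."""
    return (input_tokens / 1e6) * usd_per_m_input \
         + (output_tokens / 1e6) * usd_per_m_output

# Mid-range no-transcript loop: ~22k input, ~5k output tokens
print(f"${estimate_debrief_cost(22_000, 5_000):.2f}")  # ≈ $0.14
```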
Success metric
The metric to watch: median debrief-meeting length in calendar minutes for loops where the brief was distributed at least 4 hours in advance. Pull it from your calendar tooling (or from Ashby's interview-scheduling history) and segment into "with brief" vs "without brief" cohorts. Target trajectory: a 60-90 minute median in the without-brief cohort dropping to a 25-40 minute median in the with-brief cohort within the first 4-6 weeks.
Counter-metric to watch in parallel: 6-month post-hire regret rate in the with-brief cohort vs the without-brief cohort. If debriefs got faster but the regret rate rose, the brief is letting disagreements average out instead of surfacing them — tighten the disagreement-escalation rules in references/3-disagreement-escalation.md (typically: lower the range threshold from 2 to 1.5, or add an "any score below 3" trigger for the relevant dimension).
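A minimal sketch of the cohort split, assuming you can export debrief meetings as records with a duration and the brief's lead time (the field names here are illustrative, not from any calendar API):

```python
from statistics import median

def debrief_cohort_medians(meetings: list[dict]) -> dict:
    """Split debriefs into with-brief (distributed >= 4h ahead) vs
    without-brief cohorts and return the median length of each."""
    with_brief = [m["minutes"] for m in meetings
                  if m.get("brief_lead_hours") is not None
                  and m["brief_lead_hours"] >= 4]
    without = [m["minutes"] for m in meetings
               if m.get("brief_lead_hours") is None
               or m["brief_lead_hours"] < 4]
    return {
        "with_brief_median": median(with_brief) if with_brief else None,
        "without_brief_median": median(without) if without else None,
    }
```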
vs alternatives
Ashby's built-in debrief features. Ashby aggregates scorecards into a dashboard view and computes a panel mean. It produces no written synthesis, surfaces no rule-based decision-points, and does not distinguish "consensus at 4.0" from "under-evidenced cluster at 4.0". Use the Ashby view as the data source the skill reads, not as a substitute for the brief.
Greenhouse Scorecards aggregation. Greenhouse rolls scorecards up into a per-interviewer hire-or-no-hire tally plus an aggregate panel recommendation. The aggregate is the failure mode the skill is designed against — it pushes panels toward score-arithmetic-as-decision and obscures bar-raiser vetoes that end up averaged into an overall "thumbs up".
Manual recruiter notes. A recruiter reading every scorecard and writing a one-paragraph "topics for the debrief" email is the status quo on most teams. It captures the recruiter's read on the loop, which is valuable, but it scales linearly with recruiter time and tends to pattern-match toward "what the HM probably wants" over many iterations. The skill is consistent across recruiters and surfaces structural disagreements (R3 — HM-vs-panel divergence) that a recruiter writing the brief themselves rarely flags.
Doing nothing. The default — everyone arrives at the debrief with their own notes and the discussion runs round-robin. It works fine for low-volume teams (under 10 hires per quarter). At higher volumes, the round-robin is the bottleneck and debrief quality degrades as fatigue accumulates.
Watch-outs
Bias from one strong opinion (anchoring on the first scorecard read). Guardrail: step 2 aggregates deterministically across all interviewers before the LLM sees any individual scorecard. Step 3's rule R3 (HM-vs-panel divergence) explicitly surfaces single-strong-opinion divergence as a decision-point. The synthesis attributes evidence by interviewer role (HM, Peer, XFN, Bar-raiser) rather than by name in the per-dimension blocks, which prevents the brief from rounding toward the senior interviewer.
False consensus on under-evidenced dimensions. Guardrail: the evidence_notes minimum-length check in step 1 (under 20 chars fails). R6 (under-evidenced cluster) in step 3 surfaces dimensions where 3+ scores cluster within 1 point but the average evidence note is under 30 characters as RECOMMEND FOLLOW-UP, not as agreement. This is the most common silent failure mode of free-form debriefs.
Score-arithmetic-as-decision (treating a mean above 3.5 as "hire"). Guardrail: the brief never emits a hire/no-hire recommendation. The output format intentionally has no "Recommendation" block — only decision-points and follow-ups. A reader who tries to read off a decision finds the structure pushes them back to the discussion.
Bar-raiser veto silently overridden. Guardrail: R2 in step 3 surfaces any bar-raiser score 2+ below the panel mean as a decision-point automatically. The brief cannot be generated in a state where a bar-raiser dissent averages away — even if the rest of the panel is unanimous.
Demographic patterns leaking into the synthesis. Guardrail: the synthesis cites evidence_notes strings verbatim instead of paraphrasing, which prevents the LLM from rewriting an observation into language that telegraphs a protected-class inference. If a passed-in evidence_note contains protected-class proxies (name origin, age inference, parental-status inference, "culture fit" without behavioral anchors), the skill halts in step 1 and surfaces the offending note for rewrite before continuing.
Calibration note overinterpreted as a verdict. Guardrail: the calibration block is appended at the end of the brief, never inline per dimension. The block uses the language "within tolerance" or "outside tolerance — discuss" rather than suggesting an action, and the calibration check is skipped entirely if fewer than 5 prior same-role debriefs are loaded.
Stack
AI provider: Claude (Sonnet for the synthesis step; Opus for first-run rubric validation if the rubric is ambiguous).
Optional transcripts: BrightHire or Metaview, with documented two-party-consent capture at interview start.
Where it fits: see structured interviewing for the rubric-design discipline this skill assumes is already in place. The skill cannot rescue an unstructured interview process — it can only synthesize the signal a structured process produces.
Policy framing: see AI policy for legal teams for the Tier-A enterprise-AI handling required for candidate-data inputs (transcripts in particular are sensitive personal data under GDPR and most US state privacy regimes).
---
name: interview-debrief-summary
description: Synthesize a panel's per-interviewer scorecards (and optional transcripts) into an evidence-grounded debrief brief. Surfaces aggregate signal per rubric dimension, areas of agreement and disagreement, a recommended decision-point for the panel, and follow-up questions when signal is thin. Always stops short of issuing a hire/no-hire decision — the panel decides.
---
# Interview debrief summary
## When to invoke
Invoke this skill once a candidate's full interview loop has concluded and all interviewers have submitted their scorecards. The output is a brief the panel reads *before* the synchronous debrief meeting, so the meeting discusses the actual disagreements rather than being a 90-minute round of note-comparison.
Trigger conditions:
- All scheduled interviews completed in the ATS ([Ashby](/en/tools/ashby/), [Greenhouse](/en/tools/greenhouse/), [Lever](/en/tools/lever/)).
- Every interviewer has submitted a structured scorecard against the role rubric (free-text-only scorecards fail the input check in step 1).
- The debrief meeting is at least 2 hours away (so the brief can be read in advance, not skimmed during the call).
Do NOT invoke for:
- **Auto-deciding hire/no-hire.** This skill never emits a final decision. It emits an aggregate signal and a recommended decision-point for the panel to resolve. Auto-deciding would put the workflow inside EU AI Act Annex III high-risk obligations and most US state hiring-AI statutes (NYC LL 144, IL AIVI, MD HB 1202).
- **Sending feedback to the candidate without recruiter review.** The brief is internal-only. Synthesized rationale text can include phrasing that is fine for an internal panel but actionable as evidence in a discrimination claim if surfaced to the candidate verbatim.
- **Replacing the panel-debrief conversation.** The brief is the input to the discussion, not a substitute. Skipping the debrief because "the brief already shows consensus" is a failure mode this skill is designed to surface against (see `references/3-disagreement-escalation.md`).
- **Single-interviewer loops.** If only one interviewer was scheduled, do not invoke — there is nothing to aggregate. Run a different workflow (single-interviewer feedback) instead.
- **Transcripts without consent.** Do not pass [BrightHire](/en/tools/brighthire/) or [Metaview](/en/tools/metaview/) transcripts unless the candidate consented to recording at interview start. Two-party-consent jurisdictions (CA, FL, IL, MD, MA, MT, NH, PA, WA) make this a hard halt, not a guideline.
## Inputs
- Required: `candidate_id` — the ATS-internal candidate ID.
- Required: `role_rubric` — path to a Markdown file under `references/` with the structured rubric (dimensions, 1-5 anchor scale, anchor descriptions per level). Without this the skill refuses to run; an unstructured rubric is the most common cause of vague synthesis.
- Required: `scorecards` — an array of per-interviewer scorecard objects. Each object: `interviewer_id`, `interviewer_role` (one of `hiring_manager`, `peer`, `cross_functional`, `bar_raiser`), `dimension_scores` (map of dimension name to integer 1-5), `evidence_notes` (map of dimension name to free-text observation, minimum 20 characters per dimension).
- Required: `candidate_metadata` — `role_title`, `level_band`, `loop_type` (one of `onsite`, `virtual_onsite`, `phone_screen_panel`).
- Optional: `transcripts` — array of paths to BrightHire / Metaview transcript exports. When present, the skill cites supporting moments per evidence claim. When absent, the brief notes "transcript-unsupported" on each dimension synthesis.
- Optional: `prior_debriefs` — paths to previous debrief briefs for the same role, used by the calibration check in step 5.
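A hypothetical invocation payload matching the shapes above — the IDs, role title, and evidence notes are invented for illustration:

```python
# Hypothetical inputs for one run; every field name follows the Inputs block.
inputs = {
    "candidate_id": "cand_8842",
    "role_rubric": "references/1-interview-rubric-template.md",
    "scorecards": [
        {
            "interviewer_id": "int_017",
            "interviewer_role": "bar_raiser",
            "dimension_scores": {"Technical depth": 4, "Systems design": 2},
            "evidence_notes": {
                "Technical depth": "Walked the sharding cutover plan end-to-end unprompted.",
                "Systems design": "Could not articulate leader-follower vs multi-leader trade-offs.",
            },
        },
        # ... at least two more interviewers (hiring_manager, peer, ...)
    ],
    "candidate_metadata": {
        "role_title": "Senior Backend Engineer",
        "level_band": "L5",
        "loop_type": "virtual_onsite",
    },
    "transcripts": [],      # optional
    "prior_debriefs": [],   # optional; consumed by the step-5 calibration check
}
```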
## Reference files
Always read the following from `references/` before generating the brief. They contain the rubric scaffolding, the literal output format, and the disagreement-escalation rules. Without them the brief is generic and the guards that keep the synthesis defensible do not run.
- `references/1-interview-rubric-template.md` — the structured rubric template the role rubric must conform to. Replace the template content with your role-specific rubric before running. The skill validates the passed-in `role_rubric` against this shape.
- `references/2-debrief-brief-format.md` — the literal output format, including the per-dimension synthesis layout and the decision-point-for-the-panel block. The skill writes against this format verbatim — do not freestyle.
- `references/3-disagreement-escalation.md` — rules for when a disagreement gets surfaced as a decision-point versus left as a note. Includes the bar-raiser-veto and the hiring-manager-vs-panel-divergence rules.
## Method
Run these six steps in order. Steps 1-3 are deterministic input validation and aggregation; only step 4 uses the LLM for synthesis. Running the LLM over an unvalidated, free-text-only rubric or over a single interviewer's scorecard produces output that is fast, confident, and unusable.
### 1. Validate the rubric and inputs
Open `role_rubric` and verify it conforms to the shape in `references/1-interview-rubric-template.md`: every dimension has a 1-5 anchor table, every anchor has a behavioral description, no dimension allows free-text scoring only. Halt if any check fails — surface the offending lines.
Then validate `scorecards`:
- At least 3 distinct interviewers (below 3, panel synthesis is not meaningful — surface a one-interviewer single-feedback note instead).
- Every dimension in the rubric is scored by at least 2 interviewers (gaps mean the loop did not cover the dimension; surface as a follow-up question rather than synthesizing absent signal).
- `evidence_notes` strings ≥ 20 characters on every score (free-text-only interviewers get bumped back to re-fill before the brief runs).
The choice to halt rather than warn is intentional: a brief generated on partial inputs becomes the panel's mental anchor, even when the generator notes the partial inputs. Halting forces the missing inputs to be filled before the discussion frame is set.
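A sketch of those checks in Python, assuming the rubric dimensions and scorecards have already been parsed into the shapes described in the Inputs block:

```python
def validate_inputs(rubric_dimensions: list[str], scorecards: list[dict]) -> None:
    """Step-1 gate: collect every failure, then halt (raise) rather than warn."""
    errors = []
    if len({sc["interviewer_id"] for sc in scorecards}) < 3:
        errors.append("fewer than 3 distinct interviewers")
    for dim in rubric_dimensions:
        scored_by = [sc for sc in scorecards if dim in sc["dimension_scores"]]
        if len(scored_by) < 2:
            errors.append(f"dimension '{dim}' scored by fewer than 2 interviewers")
    for sc in scorecards:
        for dim, note in sc["evidence_notes"].items():
            if len(note.strip()) < 20:
                errors.append(
                    f"evidence note under 20 chars: {sc['interviewer_id']} / '{dim}'")
    if errors:
        # Halt, don't warn: a brief built on partial inputs anchors the panel.
        raise ValueError("input validation failed: " + "; ".join(errors))
```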
### 2. Aggregate per dimension (deterministic)
For each rubric dimension, compute:
- Mean score across interviewers.
- Min and max (the range — the disagreement signal).
- Standard deviation (used in step 4 to weight whether to surface as a decision-point).
- Per-interviewer-role breakdown (hiring_manager, peer, cross_functional, bar_raiser scores listed separately so structural disagreements surface).
Why structured rubric instead of free-form synthesis: a free-form synthesis loses the per-dimension comparability that lets the panel discuss specific evidence rather than overall impressions. Without per-dimension comparability, the debrief reverts to "everyone shares their gut feeling, loudest voice wins" — which is the failure mode this entire skill exists to prevent.
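The step-2 aggregation is plain arithmetic — a sketch over scorecards shaped like the Inputs block:

```python
from statistics import mean, pstdev

def aggregate_dimension(scorecards: list[dict], dim: str) -> dict:
    """Deterministic per-dimension aggregate; the LLM sees none of the raw scorecards."""
    scored = [(sc["interviewer_role"], sc["dimension_scores"][dim])
              for sc in scorecards if dim in sc["dimension_scores"]]
    values = [v for _, v in scored]
    by_role: dict[str, list[int]] = {}
    for role, v in scored:
        by_role.setdefault(role, []).append(v)
    return {
        "mean": round(mean(values), 2),
        "range": (min(values), max(values)),   # min/max: the disagreement signal
        "stdev": round(pstdev(values), 2) if len(values) > 1 else 0.0,
        "by_role": by_role,                    # HM / peer / XFN / bar-raiser split
    }
```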
### 3. Identify decision-points (deterministic)
Apply the rules from `references/3-disagreement-escalation.md`:
- **Range ≥ 2 across interviewers on any single dimension** → surface as a decision-point.
- **Bar-raiser score ≥ 2 below the panel mean on any dimension** → surface as a decision-point regardless of range (bar-raiser veto semantics).
- **Hiring-manager score ≥ 2 above every other interviewer's score** → surface as a decision-point (single-strong-opinion guard).
- **No-hire from any one interviewer when the rest are hire** → surface as a decision-point with the dissenting evidence verbatim.
These rules run before the LLM synthesizes, so the decision-points are based on the structured signal, not on what the LLM thinks reads as disagreement. The synthesis in step 4 then explains the underlying disagreement; it does not pick which disagreements matter.
### 4. Synthesize per dimension
For each rubric dimension, the LLM produces:
- A two-to-three-sentence synthesis of what the panel saw, grounded in `evidence_notes` strings cited verbatim (no paraphrasing — paraphrasing is where bias enters).
- The evidence supporting the higher scores, attributed to interviewer role (not name — names go in the appendix).
- The evidence supporting the lower scores, attributed similarly.
- When transcripts are available, the timestamp range in the transcript where the supporting evidence appeared. Format: `BrightHire 14:22-15:08`. When transcripts are absent, write `transcript-unsupported` and do not infer.
Why "insufficient signal" is a first-class output, not a fallback: the absence of evidence for a dimension is itself information the panel needs. A dimension with two scores both based on 20-character evidence notes is not "consensus at 4.0"; it is "two interviewers guessed at 4.0". The brief writes "insufficient signal — recommend follow-up" rather than "consensus" in that case. This is different from "no recommendation", which would withhold all output and leave the panel without a structured starting point.
### 5. Calibration check
If `prior_debriefs` is provided, compare the score distribution against the previous 5 debriefs for the same role. Flag if:
- This candidate's mean is more than 1 standard deviation above the rolling mean (possible halo / overscoring).
- This candidate's mean is more than 1 standard deviation below the rolling mean for a dimension where the role has historically scored high (possible single-strong-negative-opinion drag).
Calibration findings appear as a "Calibration note" block at the end of the brief, never inline in the per-dimension synthesis. The intent is to give the panel a frame for the discussion, not to override the specific signal on this candidate.
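A sketch of the step-5 comparison, assuming each prior debrief reduces to one overall candidate mean and the output wording follows the Calibration-note format in `references/2-debrief-brief-format.md`:

```python
from statistics import mean, stdev

def calibration_note(candidate_mean: float, prior_means: list[float]) -> str:
    """Compare against the rolling mean of the last 5 same-role debriefs."""
    if len(prior_means) < 5:
        return ("No prior debriefs loaded. Calibration check skipped — "
                "recommend running with prior_debriefs populated once 5+ exist.")
    window = prior_means[-5:]
    mu, sigma = mean(window), stdev(window)
    if sigma == 0:
        return "Prior debriefs show no variance — calibration check inconclusive."
    z = (candidate_mean - mu) / sigma
    direction = "above" if z >= 0 else "below"
    verdict = ("Within tolerance — no calibration concern flagged."
               if abs(z) <= 1 else
               "Outside tolerance — discuss whether the rubric is being "
               "applied consistently this loop.")
    return (f"This candidate's mean score ({candidate_mean:.2f}) is {abs(z):.1f} "
            f"standard deviations {direction} the rolling mean ({mu:.2f}). {verdict}")
```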
### 6. Write the brief and stop
Write to `briefs/<candidate_id>-<YYYYMMDD>.md` per the format in `references/2-debrief-brief-format.md`. Append a single line to `audit/<YYYY-MM>.jsonl`: `run_id`, `candidate_id`, `role`, `rubric_sha256`, `interviewer_count`, `dimensions_count`, `decision_points_count`, `transcripts_attached` (boolean), `model_id`, `timestamp`. No candidate PII in the audit line.
Do not call any "send to candidate", "post to Slack channel", or "update ATS stage" endpoint. The brief is internal to the panel until the recruiter and hiring manager decide what to do with the synthesis.
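A sketch of the audit append. The field names follow the list above; nothing candidate-identifying beyond the ATS-internal `candidate_id` goes in the line:

```python
import hashlib, json, uuid
from datetime import datetime, timezone
from pathlib import Path

def append_audit_line(candidate_id: str, role: str, rubric_text: str,
                      interviewer_count: int, dimensions_count: int,
                      decision_points_count: int, transcripts_attached: bool,
                      model_id: str) -> None:
    now = datetime.now(timezone.utc)
    line = {
        "run_id": str(uuid.uuid4()),
        "candidate_id": candidate_id,  # ATS-internal ID, not PII
        "role": role,
        "rubric_sha256": hashlib.sha256(rubric_text.encode()).hexdigest(),
        "interviewer_count": interviewer_count,
        "dimensions_count": dimensions_count,
        "decision_points_count": decision_points_count,
        "transcripts_attached": transcripts_attached,
        "model_id": model_id,
        "timestamp": now.isoformat(),
    }
    path = Path("audit") / f"{now:%Y-%m}.jsonl"
    path.parent.mkdir(exist_ok=True)
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(line) + "\n")
```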
## Output format
```markdown
# Interview debrief brief — {candidate_id} · {role_title} · {level_band}
Generated: {ISO timestamp} · Loop: {loop_type} · Interviewers: {n} ·
Rubric SHA: {short} · Transcripts: {yes|no}
## Aggregate signal
| Dimension | Mean | Range | HM | Peer | XFN | Bar-raiser |
|---|---|---|---|---|---|---|
| Technical depth | 4.2 | 4-5 | 5 | 4 | 4 | 4 |
| Systems design | 3.0 | 2-4 | 4 | 3 | 3 | 2 |
| Communication | 4.0 | 3-5 | 4 | 4 | 5 | 3 |
| Execution under pressure | 3.2 | 3-4 | 3 | 3 | 4 | 3 |
| Ownership | 4.5 | 4-5 | 5 | 4 | 5 | 4 |
## Per-dimension synthesis
### Technical depth — mean 4.2, range 4-5
The panel saw consistent depth on backend systems work. HM cites
"led the migration from a Postgres monolith to a sharded
Citus cluster, owned the cutover playbook end-to-end" (BrightHire
14:22-15:08). Peer and XFN cite the same migration with corroborating
detail. Bar-raiser scored 4 (not 5) on the basis that the candidate's
description of the rollback plan was "more reactive than I'd want at
this level" (transcript-unsupported, scorecard only). No decision-point
surfaced — disagreement is within tolerance.
### Systems design — mean 3.0, range 2-4 — DECISION-POINT
Range exceeds the threshold. HM scored 4 citing "drew the right
boundary between sync and async paths". Bar-raiser scored 2 citing
"could not articulate the trade-off between leader-follower and
multi-leader replication when prompted" (BrightHire 32:10-34:45). The
panel needs to resolve whether the bar-raiser's specific concern about
replication-topology fluency is load-bearing for the level, or whether
it is one weak moment in an otherwise strong design conversation.
### Communication — insufficient signal — RECOMMEND FOLLOW-UP
Two interviewers (HM, Peer) scored 4 with evidence notes under 30
characters. XFN scored 5 with no evidence note. Bar-raiser scored 3
with the note "felt scripted on the situational question, but I may
be reading too much in." This is not consensus at 4.0; it is
under-evidenced. Recommend follow-up question in the next round if
the candidate advances, or a 30-minute follow-up call with the
bar-raiser to walk through the specific moments.
[continues for each remaining dimension]
## Decision-points for the panel
1. **Systems design — replication-topology fluency.** Bar-raiser scored
2, HM scored 4. Resolve: is fluency on multi-leader vs
leader-follower trade-offs required at this level, or is the broader
design judgment sufficient?
2. **Communication — under-evidenced consensus.** Three scores cluster
at 4-5 but evidence notes are thin. Resolve: do we trust the cluster,
or do we ask for a follow-up signal?
3. **Bar-raiser dissent on technical depth.** Bar-raiser at 4 vs panel
mean of 4.2 — within tolerance, but the rollback-plan concern is
worth airing as a development area if hire.
## Follow-up questions if signal is thin
- For Communication: a 30-minute follow-up with the bar-raiser walking
through the situational-question moments.
- For Systems design: a take-home or whiteboard follow-up specifically
on replication topology trade-offs.
## Calibration note
This candidate's mean score across dimensions (3.78) is 0.4 standard
deviations above the rolling mean for the last 5 senior-backend
debriefs (3.51). Within tolerance — no calibration concern flagged.
## Appendix — per-interviewer evidence
[Per-interviewer scorecards, with names, in full. The synthesis above
attributes by role only; names live here so the panel can ask
specific interviewers to elaborate without ambiguity.]
```
## Watch-outs
- **Bias from one strong opinion (anchoring on the first scorecard).** *Guard:* step 2 aggregates deterministically across all interviewers before the LLM sees any single scorecard, and step 3's hiring-manager-vs-panel-divergence rule explicitly surfaces single-strong-opinion divergence as a decision-point. The LLM does not "round up" toward the senior interviewer's score.
- **False consensus on under-evidenced dimensions.** *Guard:* `evidence_notes` minimum-length check in step 1 (≥ 20 chars), and step 4's "insufficient signal" first-class output. A dimension where three interviewers scored 4 with one-word evidence notes is written as "insufficient signal — recommend follow-up", not as "consensus at 4.0". This is the most common silent failure of free-form debriefs.
- **Score-arithmetic-as-decision (treating mean ≥ 3.5 as "hire").** *Guard:* the brief never emits a hire/no-hire recommendation. It emits decision-points for the panel. The output format intentionally has no "Recommendation" block — only "Decision-points for the panel" and "Follow-up questions". A reader who tries to read off a decision finds the structure pushes them back to discussion.
- **Bar-raiser veto silently overridden.** *Guard:* step 3's rule surfaces any bar-raiser score ≥ 2 below the panel mean as a decision-point automatically. The brief cannot be generated in a state where a bar-raiser dissent is averaged away.
- **Demographic patterns leaking into synthesis.** *Guard:* the synthesis cites `evidence_notes` strings verbatim rather than paraphrasing, which prevents the LLM from rewriting an observation into language that telegraphs a protected-class inference. If a passed-in `evidence_note` itself contains protected-class proxies, the skill halts in step 1 and surfaces the offending note for re-write before continuing.
- **Calibration note overinterpreted as a verdict.** *Guard:* the calibration block is appended at the end of the brief, never inline per dimension. The intent is to frame the conversation, not adjust individual scores. The brief explicitly says "within tolerance" or "outside tolerance — discuss" rather than suggesting an action.
# Interview rubric — TEMPLATE
> Replace this template's contents with your role-specific rubric.
> The interview-debrief-summary skill validates the passed-in rubric
> against the shape below in step 1 and halts if any dimension is
> missing the required structure. Do not loosen the structure to make
> a vague rubric pass — fix the rubric instead.
## Role metadata
- **Role title**: {e.g. Senior Backend Engineer}
- **Level band**: {e.g. L5 / Senior IC}
- **EEOC job category**: {e.g. Professionals — required for audit log}
- **Last edited**: {YYYY-MM-DD}
- **Owner**: {hiring manager name + recruiter name}
## Score scale
All dimensions use the same 1-5 scale. Anchors below are the *minimum* behavior required at each level; a candidate scoring above the level exceeds the anchor in addition to meeting it.
| Score | Label | Meaning |
|---|---|---|
| 1 | Strong no | Misses the bar by a wide margin; would block the team |
| 2 | No | Below the bar with no clear path to growing into it in 6mo |
| 3 | Mixed | At the bar with a meaningful gap; viable with development plan |
| 4 | Yes | At or above the bar; ready to contribute on day one |
| 5 | Strong yes | Above the bar with capacity to lift the team |
## Dimensions
Each dimension below MUST have a 1-5 anchor table with behavioral descriptions. Free-text-only anchors fail the rubric validation in step 1.
### Dimension 1 — {e.g. Technical depth}
What this dimension tests: {one sentence}
| Score | Behavioral anchor |
|---|---|
| 1 | {behavior — e.g. "Cannot describe systems they have built without prompting"} |
| 2 | {behavior} |
| 3 | {behavior} |
| 4 | {behavior} |
| 5 | {behavior — e.g. "Walks through trade-offs at multiple levels of the stack unprompted, with concrete examples"} |
Common evidence sources: take-home review, system-design conversation, deep-dive on past projects.
### Dimension 2 — {e.g. Systems design}
What this dimension tests: {one sentence}
| Score | Behavioral anchor |
|---|---|
| 1 | {behavior} |
| 2 | {behavior} |
| 3 | {behavior} |
| 4 | {behavior} |
| 5 | {behavior} |
Common evidence sources: system-design interview, architecture review.
### Dimension 3 — {e.g. Communication}
What this dimension tests: {one sentence}
| Score | Behavioral anchor |
|---|---|
| 1 | {behavior} |
| 2 | {behavior} |
| 3 | {behavior} |
| 4 | {behavior} |
| 5 | {behavior} |
Common evidence sources: every interview; hiring-manager screen explicitly tests structured explanation.
### Dimension 4 — {e.g. Execution under pressure}
What this dimension tests: {one sentence}
| Score | Behavioral anchor |
|---|---|
| 1 | {behavior} |
| 2 | {behavior} |
| 3 | {behavior} |
| 4 | {behavior} |
| 5 | {behavior} |
### Dimension 5 — {e.g. Ownership}
What this dimension tests: {one sentence}
| Score | Behavioral anchor |
|---|---|
| 1 | {behavior} |
| 2 | {behavior} |
| 3 | {behavior} |
| 4 | {behavior} |
| 5 | {behavior} |
> Add or remove dimensions to match the role. Keep the count between
> 4 and 7. Below 4, the panel cannot triangulate; above 7, scorecards
> get filled out as a chore and evidence quality degrades.
## Interviewer-role assignment
Per loop, every dimension should be covered by at least 2 interviewers of different `interviewer_role`s (the skill validates this in step 1). Suggested coverage matrix:
| Dimension | Hiring manager | Peer | Cross-functional | Bar-raiser |
|---|---|---|---|---|
| Technical depth | yes | yes | — | yes |
| Systems design | — | yes | — | yes |
| Communication | yes | yes | yes | yes |
| Execution under pressure | yes | — | yes | — |
| Ownership | yes | yes | — | yes |
## Disqualifiers
Single signals that result in a no-hire regardless of other dimensions. Keep this list short and mechanical. The skill flags these prominently in the brief if any interviewer notes them.
- {e.g. "Misrepresented past role title or scope" — backed by reference check}
- {e.g. "Hostile or dismissive toward an interviewer or coordinator" — noted by 2+ interviewers}
## Things this rubric does NOT score
These get explicitly excluded so they cannot creep back as "intuition":
- "Culture fit" without behavioral anchors — replace with the specific behaviors you mean.
- School prestige as a standalone signal — appears in pattern-match dimensions only.
- Tenure pattern interpretation that penalizes parental leave or health gaps.
- Any inference from photo, name origin, or pronoun usage.
# Debrief brief — output format
> The interview-debrief-summary skill writes against this format
> verbatim in step 6. Do not freestyle the structure — the panel
> reads many of these and consistency is what makes them scannable.
> Replace `{placeholders}` with real values; keep the section
> headings and ordering exactly as below.
## Required structure
Every brief MUST contain these sections in this order:
1. Title line
2. Header (generated, loop type, interviewers, rubric SHA, transcripts)
3. Aggregate signal table
4. Per-dimension synthesis (one block per rubric dimension)
5. Decision-points for the panel
6. Follow-up questions if signal is thin
7. Calibration note (always present; says "no prior data" if first run)
8. Appendix — per-interviewer evidence
There is intentionally NO "Recommendation" section. The brief never emits a hire/no-hire. The panel resolves the decision-points and makes the call in the synchronous debrief.
## Template
```markdown
# Interview debrief brief — {candidate_id} · {role_title} · {level_band}
Generated: {ISO timestamp} · Loop: {loop_type} · Interviewers: {n} ·
Rubric SHA: {short} · Transcripts: {yes|no}
## Aggregate signal
| Dimension | Mean | Range | HM | Peer | XFN | Bar-raiser |
|---|---|---|---|---|---|---|
| {dimension 1} | {mean} | {min}-{max} | {score} | {score} | {score} | {score} |
| {dimension 2} | ... | ... | ... | ... | ... | ... |
Cells with no score: dash (`—`), not zero. The skill never imputes
a missing score.
## Per-dimension synthesis
For each dimension, one block in this exact shape:
### {Dimension name} — mean {x.x}, range {min}-{max}{ — DECISION-POINT|RECOMMEND FOLLOW-UP}{empty if neither}
{Two-to-three-sentence synthesis. Cite `evidence_notes` strings
verbatim in quotation marks. Attribute to interviewer ROLE, not
name (HM, Peer, XFN, Bar-raiser). When transcripts are available,
include the timestamp range as `(BrightHire 14:22-15:08)` after the
quoted evidence. When transcripts are absent, write
`transcript-unsupported` after the quoted evidence and do not infer.}
{Optional second paragraph: the disagreement, if a decision-point.
Names the specific resolved-vs-unresolved tension for the panel to
discuss. Two sentences max.}
The "DECISION-POINT" suffix is added when step 3's escalation rules
fire. The "RECOMMEND FOLLOW-UP" suffix is added when the synthesis
in step 4 marks the dimension as insufficient signal. Neither suffix
when the dimension is consensus-with-evidence.
## Decision-points for the panel
Numbered list. Each item names the dimension, the divergence, and the
specific question to resolve. Three to five items in a typical brief.
If there are zero decision-points, write the literal sentence:
> No decision-points surfaced. The panel may want to confirm that the
> consensus reflects shared evidence rather than shared assumptions
> before treating the loop as resolved.
(That fallback is intentional — frictionless consensus is itself a
calibration concern.)
## Follow-up questions if signal is thin
Bulleted list. Each bullet is a specific follow-up the panel could
run before deciding: a 30-minute follow-up call, a take-home, a
reference check on a specific dimension, a re-interview by a
specific role. Empty list is acceptable; write "None — signal is
sufficient on every dimension" if so.
## Calibration note
One paragraph. Compares this candidate's per-dimension score
distribution against the rolling mean from `prior_debriefs` (last 5
debriefs for the same role). Format:
> This candidate's mean score across dimensions ({x.xx}) is {n.n}
> standard deviations {above|below} the rolling mean for the last
> {k} {role_title} debriefs ({y.yy}). {Within tolerance — no
> calibration concern flagged. | Outside tolerance — discuss whether
> the rubric is being applied consistently this loop.}
If `prior_debriefs` was not provided: write "No prior debriefs
loaded. Calibration check skipped — recommend running with
`prior_debriefs` populated once 5+ same-role debriefs exist."
## Appendix — per-interviewer evidence
Per-interviewer scorecards in full, with names. The synthesis above
attributes by role only; names live here so the panel can ask
specific interviewers to elaborate without ambiguity.
For each interviewer:
### {Interviewer name} — {interviewer_role} — overall {summary score if scorecard provides one, else dash}
| Dimension | Score | Evidence |
|---|---|---|
| {dimension 1} | {score} | {evidence_notes string verbatim} |
| {dimension 2} | ... | ... |
```
## Formatting rules
- Soft-wrap prose paragraphs in the synthesis blocks. Tables, headings, and block quotes are preserved verbatim.
- Use `—` (em-dash) for missing values in the aggregate-signal table.
- Quote evidence notes verbatim in quotation marks. Do not paraphrase. Paraphrasing is where bias and false certainty enter.
- Interviewer-role labels in the synthesis: `HM`, `Peer`, `XFN`, `Bar-raiser`. Always exactly these strings — the brief is sometimes parsed downstream for analytics.
- Timestamp citations: `(Tool TimecodeStart-TimecodeEnd)`. The tool name is `BrightHire` or `Metaview`. Timecodes are `mm:ss-mm:ss`.
- File location: `briefs/<candidate_id>-<YYYYMMDD>.md`.
## What this format intentionally does NOT include
- A "Recommendation" or "Decision" section.
- A confidence score.
- A summary "lean" toward hire or no-hire.
- An overall pass/fail at the top of the brief.
These omissions are load-bearing. Every one of them, in earlier iterations, became the one thing the panel read — turning the brief into the decision and the meeting into a rubber stamp.
# Disagreement escalation rules
> The interview-debrief-summary skill applies these rules in step 3
> (deterministic decision-point identification) before the LLM runs
> the per-dimension synthesis. The rules are deliberately strict —
> the cost of surfacing a non-disagreement as a decision-point is
> 2 minutes of panel discussion; the cost of averaging a real
> disagreement away is a regretted hire or a missed strong candidate.
## Rules
Apply each rule independently. A dimension that triggers any rule is flagged with the `DECISION-POINT` suffix in the synthesis output and appears in the "Decision-points for the panel" section.
### R1. Range-on-dimension
**Trigger:** any single dimension where `max(scores) - min(scores) >= 2`.
**Rationale:** a 2-point spread on a 1-5 scale crosses a meaningful behavioral anchor (e.g. "below the bar" to "at the bar"). Two interviewers seeing the same candidate that differently is a calibration issue or an evidence-asymmetry issue — both worth discussing.
**Example:** Systems design — HM 4, Peer 3, Bar-raiser 2. Range = 2. Surface as decision-point.
### R2. Bar-raiser-veto
**Trigger:** `bar_raiser_score <= panel_mean - 2` on any dimension, where `panel_mean` excludes the bar-raiser.
**Rationale:** the bar-raiser role exists to apply a level-consistent standard across many loops. A bar-raiser scoring 2+ below the rest of the panel on a dimension means the panel is calibrated to a different standard than the one the bar-raiser is holding. That gap is load-bearing — not a tie-breaker, but a calibration discussion.
**Example:** Technical depth — HM 5, Peer 4, XFN 4 (panel mean 4.33), Bar-raiser 2. Surface as decision-point.
**Edge case:** if there is no bar-raiser in the loop, this rule does not fire. The brief notes "no bar-raiser in loop" in the calibration block.
### R3. Hiring-manager-vs-panel divergence
**Trigger:** `hiring_manager_score >= max(other_scores) + 2` on any dimension.
**Rationale:** the hiring manager is the most consequential single voice in most hiring decisions and the most prone to single-strong-opinion bias. A hiring manager scoring 2+ above every other interviewer is the pattern that produces "we hired them because the HM loved them and nobody pushed back."
**Example:** Communication — HM 5, Peer 3, XFN 3, Bar-raiser 3. HM is 2 above max of others. Surface as decision-point.
**Note:** this rule fires *upward* (HM higher than panel), not downward. A hiring manager scoring well below the panel typically self-resolves in the meeting; the upward case is the one that needs structural escalation.
### R4. Single-no-among-yes
**Trigger:** any single interviewer's overall scorecard recommendation is `no_hire` or `strong_no` while every other interviewer recommends `hire` or `strong_hire`.
**Rationale:** a single dissenting no-hire is the highest-information signal in a debrief — either the dissenter saw something the panel missed (in which case the hire is at risk) or the dissenter has a miscalibration on this candidate (in which case it is a coaching opportunity for the dissenter). Both outcomes require explicit discussion. Averaging the dissent away is the failure mode.
**Example:** HM hire, Peer strong_hire, XFN hire, Bar-raiser no_hire. Surface as decision-point with the bar-raiser's evidence verbatim.
### R5. Coverage-gap
**Trigger:** any rubric dimension with fewer than 2 interviewer scores.
**Rationale:** a dimension scored by only one interviewer is not a panel signal; it is one person's read. The brief surfaces the gap as a follow-up question rather than as a decision-point — the recommended action is to gather more signal, not to debate the existing one.
**Output location:** appears in "Follow-up questions if signal is thin", not in "Decision-points for the panel".
### R6. Under-evidenced cluster
**Trigger:** a dimension where 3+ interviewers' scores cluster within 1 point AND the mean evidence-note length across those interviewers is below 30 characters.
**Rationale:** a tight cluster of scores backed by one-sentence evidence is "consensus" only in the same sense that "everyone agreed the food was fine" is a restaurant review. The synthesis writes it as `RECOMMEND FOLLOW-UP` rather than as agreement.
**Output location:** appears as `RECOMMEND FOLLOW-UP` suffix on the per-dimension synthesis AND in "Follow-up questions if signal is thin".
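The rules above are mechanical enough to implement directly. A sketch covering R1-R4 over the scorecard shapes from SKILL.md's Inputs block (R5 and R6 route to the follow-up section rather than to decision-points, so they are handled separately):

```python
def decision_point_triggers(scorecards: list[dict], dim: str) -> list[str]:
    """Apply R1-R3 to one dimension; return the names of the rules that fire."""
    scored = [(sc["interviewer_role"], sc["dimension_scores"][dim])
              for sc in scorecards if dim in sc["dimension_scores"]]
    vals = [v for _, v in scored]
    fired = []
    # R1: range >= 2 across interviewers on the dimension
    if len(vals) >= 2 and max(vals) - min(vals) >= 2:
        fired.append("R1 range-on-dimension")
    # R2: bar-raiser <= panel mean - 2, where the mean excludes the bar-raiser
    br = [v for r, v in scored if r == "bar_raiser"]
    non_br = [v for r, v in scored if r != "bar_raiser"]
    if br and non_br and br[0] <= (sum(non_br) / len(non_br)) - 2:
        fired.append("R2 bar-raiser-veto")
    # R3: HM >= max(other scores) + 2 — fires upward only
    hm = [v for r, v in scored if r == "hiring_manager"]
    others = [v for r, v in scored if r != "hiring_manager"]
    if hm and others and hm[0] >= max(others) + 2:
        fired.append("R3 HM-vs-panel divergence")
    return fired

def single_no_among_yes(recommendations: dict[str, str]) -> bool:
    """R4: exactly one no_hire/strong_no while every other interviewer is hire/strong_hire."""
    nos = [r for r in recommendations.values() if r in ("no_hire", "strong_no")]
    yeses = [r for r in recommendations.values() if r in ("hire", "strong_hire")]
    return len(nos) == 1 and len(nos) + len(yeses) == len(recommendations)
```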
## Rules NOT applied
These were considered and explicitly rejected:
- **"Average score below 3.0 → no-hire decision-point."** Rejected because it conflates an aggregation rule (the brief never aggregates to a hire/no-hire) with an escalation rule (the brief surfaces disagreements). The panel decides whether mean 2.8 means no-hire; the brief just shows that mean 2.8 is the score.
- **"More than X minutes of recorded silence in transcript → flag rapport issue."** Rejected because rapport interpretation from silence is exactly the kind of inference that surfaces protected-class proxies. Transcripts are used for evidence citation only, never for inferred-state analysis.
- **"Panel-tenure-weighted mean."** Rejected because weighting a senior interviewer's score above a junior one builds the seniority bias the bar-raiser role is supposed to neutralize. All scores are equal-weight in the aggregation; structural disagreements (R2, R3) are surfaced separately.
## When the rules conflict
If a single dimension triggers multiple rules (e.g. R1 AND R2 both fire on Systems design), the synthesis surfaces it as one decision-point with both triggers cited. The "Decision-points for the panel" entry names both ("range across panel of 2 points, including bar-raiser scoring 2 below panel mean").
If the brief has more than 5 decision-points, the brief surfaces all of them but adds a paragraph at the top of the section noting that the loop produced unusually high disagreement and the calibration of the rubric (or the loop composition) may itself be the discussion to have first.