A Claude Skill that audits a candidate slate (the recruiter’s intended interview lineup, or the full sourced pool, or the application pool) against the role’s relevant labor-market reference pool, surfaces composition gaps, and emits a structured audit record — without running statistical inference on individual candidates and without recommending which candidates to add or drop. The output is decision support for the recruiter and the DEI lead, not an automated decision system.
## When to use
- You’re cutting a slate from a sourced pool to send to the hiring manager and want to know whether the slate’s composition reflects the role’s relevant labor-market pool before you send it.
- You’re closing a quarter and need an aggregated audit across roles for the DEI program review.
- You’re preparing a NYC Local Law 144 bias-audit submission and need an internal pre-check of slate composition before the formal independent audit.
## When NOT to use
- **Identifying individual candidates’ protected-class membership.** The skill processes aggregated, self-reported demographic data only. It refuses to infer demographics from name, photo, school, or any candidate-level signal.
- **Auto-rejecting candidates to “rebalance” a slate.** Rejecting a candidate to hit a composition number is reverse discrimination and triggers the same legal exposure as the original imbalance. The skill surfaces the gap; the fix is upstream (sourcing channels, search query, JD language), not at the slate-cut step.
- **Composition data the candidates didn’t consent to.** Self-ID data has its own consent flow under the candidate authorization the firm’s ATS captures (Ashby, Greenhouse, and Lever all expose this). The skill processes only the data the candidate agreed to share, in aggregate.
- **Single-role slates of fewer than 5 candidates.** The smaller the slate, the less the audit signal means. The skill warns at sizes below 5 and refuses to compute composition stats below 3.
## Setup
1. **Drop the bundle.** Place apps/web/public/artifacts/diversity-slate-auditor-skill/SKILL.md into your Claude Code skills directory.
2. **Configure the reference-pool source.** The skill needs a reference pool for comparison — usually BLS occupational employment statistics (free, public), augmented with industry-specific data where available. The reference-pool selector in references/1-reference-pools.md documents which BLS table maps to which role family.
3. **Wire the ATS export.** Ashby and Greenhouse both expose self-ID exports via their APIs (Ashby’s /candidate.list with self-ID columns; Greenhouse’s applications endpoint with EEOC fields). The skill reads the export; it does not call the ATS directly. This separation means data minimization happens at export time and the skill never sees raw candidate records.
4. **Set the slate-size guardrails.** Default: warn at sizes below 5, refuse at sizes below 3. Tune per role family if your team’s typical slate sizes differ.
5. **Dry-run on a closed slate.** Audit the slate from a role you closed last quarter and compare the skill’s gap analysis to your DEI lead’s read of the same slate. The skill surfaces composition deltas; whether those deltas matter is a judgment call the skill does not make.
## What the skill actually does
Six steps. The skill is structured to keep inference at the aggregate level — never at the candidate level — and to surface gaps without recommending interventions, because the right intervention varies by gap source and does not live at the slate-cut step.
1. **Load the slate** — the candidates you intend to interview, the sourced pool, or the application pool, depending on what the recruiter wants to audit. The skill expects an aggregate-level export: per-candidate self-ID is read only to compute aggregates; no per-candidate analysis is emitted.
2. **Load the reference pool** for the role family. BLS occupational employment statistics are the default; the mapping from role family to BLS table lives in references/1-reference-pools.md. Industry-specific reference pools (e.g. the Stack Overflow Developer Survey for software engineering) can be substituted by the recruiter.
3. **Compute composition deltas** at the slate-vs-reference-pool level. For each demographic dimension the slate has self-ID data on (gender, race/ethnicity per EEOC categories, veteran status, disability status — only the dimensions the firm collects), compute the slate’s percentage and the reference pool’s percentage, then the delta between them.
4. **Surface gaps per dimension with a confidence band.** A delta of 5pp on a slate of 50 means more than the same delta on a slate of 8. The confidence band reflects the slate size and the reference pool’s specificity.
5. **Surface upstream gap candidates.** For each surfaced delta, list 3-5 likely upstream causes the recruiter can investigate — sourcing-channel mix, search-query language (the Boolean search builder’s fairness pre-flight catches some of these), JD language, hiring-manager language in the screen. Do NOT rank or recommend; list candidates for the recruiter and DEI lead to investigate.
6. **Emit the audit record** — a signed JSONL line with slate composition, the reference pool used, the computed deltas, and the skill’s version. No PII. The audit record is what makes a NYC LL 144 submission or an internal DEI review defensible.
## Cost reality
Per slate audit, on Claude Sonnet 4.6:
- **LLM tokens** — 5-10k input (slate aggregates + reference-pool table + skill instructions) and 2-3k output (per-dimension gap analysis + upstream candidates). Roughly $0.05-0.10 per audit.
- **Reference-pool data** — BLS data is free. The Stack Overflow Developer Survey is free. Industry-specific datasets vary; the BLS-only path costs $0.
- **Recruiter / DEI-lead time** — the win. Composition audits are usually skipped because they’re tedious; the skill makes the audit the default rather than an extra step. Expect 5-10 minutes per slate to read the audit, plus 20-40 minutes per quarter to investigate the surfaced upstream gap candidates.
- **Setup time** — 45 minutes once, for the reference-pool mapping and ATS export wiring.
## Success metric
Track three things, monthly, not per-slate:
- **Composition delta drift over time** — does the slate-vs-reference-pool gap narrow on tracked roles? If it doesn’t, the upstream interventions aren’t working.
- **Sourcing-channel mix shift** — when the audit surfaces a sourcing-channel gap candidate, does the channel mix actually shift in the next quarter? If sourcing keeps recommending the same channels, the audit’s upstream findings aren’t reaching sourcing.
- **NYC LL 144 / internal DEI audit gap** — when the formal annual bias audit happens, do its findings match what the slate-by-slate audits surfaced through the year? If the formal audit surfaces gaps the slate audits missed, the reference-pool mapping or the tracked dimensions are incomplete.
## vs alternatives
- **vs ATS-native diversity dashboards (Greenhouse Inclusion, Ashby’s diversity reporting).** ATS-native dashboards show composition; they don’t compute reference-pool deltas or surface upstream candidates. Pick ATS-native if you only need reporting. Pick the skill if you need decision support per slate.
- **vs Crosschq Diversity / SeekOut DEI / Eightfold’s diversity layer.** These are deeper products with their own reference pools and analysis layers. Pick them if budget supports the platform play and you want a managed product. Pick the skill if you want the audit logic in your repo, a reference-pool mapping you control, and a portable audit record.
- **vs hand-computed composition stats.** Hand-computing is fine for the once-a-year DEI review but slips at slate cadence; nobody hand-computes per slate. The skill makes the audit cheap enough to run on every slate.
- **vs no audit at all.** The default — and the legal exposure under NYC LL 144 (annual bias audit required for AI tools used in NYC hiring). The skill is the cheapest defensible posture.
## Watch-outs
- **Reverse discrimination from “rebalancing.”** *Guard:* the skill never recommends adding or dropping individual candidates. Adjusting a slate by removing candidates to hit composition numbers is reverse discrimination and creates the same legal exposure as the original imbalance. The audit surfaces; the fix is upstream.
- **Inferring demographics from candidate signals.** *Guard:* the skill processes only self-ID data the candidate consented to share. It refuses to infer race/ethnicity from name, gender from pronouns, or age from graduation year — any candidate-level inference. The reference pools used for comparison are aggregate statistics, not candidate-level features.
- **Small-slate noise.** *Guard:* slate sizes below 5 produce a warning header on the audit; below 3 the skill refuses to compute composition stats.
- **Stale reference pools.** *Guard:* the reference-pool mapping in references/1-reference-pools.md carries a last_verified date per source. Sources older than 18 months trigger a warning to refresh the mapping.
- **Audit-trail tampering.** *Guard:* audit records are append-only JSONL with the skill version embedded. Modification breaks the file’s signing chain. Audit-record retention should run at least as long as the firm’s hiring-record retention (typically 2-7 years).
- **DEI-data exfiltration risk.** *Guard:* the audit record contains aggregates and deltas, not per-candidate fields. The skill refuses to write per-candidate self-ID data into the audit record.
## Stack
The skill bundle lives at apps/web/public/artifacts/diversity-slate-auditor-skill/ and contains:
- SKILL.md — the skill definition itself
- references/1-reference-pools.md — the role-family-to-reference-pool mapping
- references/2-audit-record-format.md — the literal output format for the JSONL audit record

Tools the workflow assumes you use: Claude (the model) and Ashby or Greenhouse (the ATS, for the self-ID export). For the parallel sourcing-channel audit, see the Boolean search builder — its fairness pre-flight catches some upstream gap causes.
---
name: diversity-slate-auditor
description: Audit a candidate slate's composition against a reference labor-market pool, surface per-dimension gaps with confidence bands, list upstream gap candidates for the recruiter to investigate, and emit an audit record. Never makes per-candidate inferences; never recommends adding or removing individual candidates from a slate.
---
# Diversity slate auditor
## When to invoke
Use this skill when a recruiter or DEI lead has a candidate slate (interview lineup, sourced pool, application pool) and wants the slate's composition audited against the role's reference labor-market pool. Take an aggregate-level slate export plus a reference-pool mapping as input and return a structured audit report plus an append-only JSONL audit record.
Do NOT invoke this skill for:
- **Identifying individual candidates' protected-class membership.** This skill processes self-reported aggregate data only. It refuses to infer demographics from name, photo, school, or any candidate-level signal.
- **Auto-rejecting candidates to "rebalance" a slate.** The skill surfaces gaps; it never recommends adding or dropping individual candidates. Rebalancing by candidate-level removal is reverse discrimination.
- **Composition data candidates have not consented to share.** Self-ID flows in Ashby/Greenhouse/Lever capture explicit consent. The skill processes only consented data.
- **Slates of <3 candidates.** Composition statistics are not meaningful at that size.
## Inputs
- Required: `slate_export` — path to a per-role aggregate export from the ATS. The export should contain self-ID counts per dimension at the slate level, NOT per-candidate rows. Example: `{ "gender": {"woman": 4, "man": 7, "non_binary": 1, "decline_to_state": 2}, "race_ethnicity": {...}, ... }`. If the export is per-candidate, the skill aggregates first and discards the per-row data before any analysis.
- Required: `role_family` — string identifying the role (e.g. `senior-software-engineer`, `account-executive`). Used to look up the reference pool in `references/1-reference-pools.md`.
- Optional: `reference_pool_override` — path to a custom reference-pool file (e.g. industry-specific data). If absent, defaults to BLS for the mapped occupation.
- Optional: `slate_label` — free-text label for the audit record (e.g. `Q2-2026-senior-eng-onsite-slate`).
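If the export arrives per-candidate rather than pre-aggregated, the aggregate-then-discard step can be sketched as follows. This is a minimal illustration, not skill internals; the row field names are assumptions, not any ATS's actual export schema.

```python
from collections import Counter

def aggregate_self_id(rows, dimensions=("gender", "race_ethnicity")):
    """Collapse per-candidate self-ID rows into slate-level counts.

    Nothing candidate-level is returned; `rows` goes out of scope after
    this call, which is the point of aggregate-then-discard.
    """
    slate = {dim: Counter() for dim in dimensions}
    for row in rows:
        for dim in dimensions:
            # Missing self-ID is counted as decline_to_state, never inferred.
            slate[dim][row.get(dim, "decline_to_state")] += 1
    return {dim: dict(counts) for dim, counts in slate.items()}

rows = [
    {"gender": "woman", "race_ethnicity": "Asian"},
    {"gender": "man", "race_ethnicity": "White"},
    {"gender": "man"},  # no race/ethnicity self-ID on this row
]
agg = aggregate_self_id(rows)
# agg["gender"] → {"woman": 1, "man": 2}
```

The aggregates feed every subsequent step; the per-row list is never passed further.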
## Reference files
- `references/1-reference-pools.md` — role-family-to-reference-pool mapping with sources, dates, and the BLS occupation codes.
- `references/2-audit-record-format.md` — the literal JSONL schema for the audit record.
## Method
Six steps.
### 1. Load the slate
Open `slate_export`. If the export is per-candidate, aggregate immediately and discard the per-row data — DO NOT pass per-candidate self-ID through any subsequent step.
If the slate has <3 candidates, halt: "Slate too small for audit. Composition statistics on <3 candidates are not meaningful and risk identifying individuals."
If the slate has 3-4 candidates, emit a warning header on the audit but continue: "Small slate — composition deltas have wide confidence bands."
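The load-time guardrail reduces to a small threshold function. A sketch of the rule as stated above (the function name is illustrative):

```python
def slate_size_check(n: int) -> str:
    """Guardrail action for a slate of n candidates."""
    if n < 3:
        return "refuse"                # halt: no stats computed, no record written
    if n < 5:
        return "small_slate_warning"   # continue, with the warning header
    return "ok"
```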
### 2. Load the reference pool
Read `references/1-reference-pools.md` and map `role_family` to the appropriate BLS occupation code (or other source). Load the reference pool's per-dimension percentages.
If the reference pool's `last_verified` date is older than 18 months, emit a freshness warning on the audit. Continue.
If `reference_pool_override` is provided, use that file instead and skip the BLS mapping.
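The lookup logic can be sketched as below, assuming the entries in `references/1-reference-pools.md` have already been parsed into a dict keyed by role family. The function and parameter names are hypothetical.

```python
def lookup_reference_pool(role_family, mappings, override_path=None):
    """Resolve the reference pool for a role family.

    `mappings` stands in for the parsed entries of
    references/1-reference-pools.md, keyed by role_family.
    """
    if override_path is not None:
        # reference_pool_override short-circuits the BLS mapping entirely
        return {"source": override_path}
    if role_family not in mappings:
        raise KeyError(f"no reference-pool entry for role_family {role_family!r}")
    return mappings[role_family]

mappings = {"recruiter": {"bls_occupation_code": "13-1071",
                          "last_verified": "2026-01-15"}}
```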
### 3. Compute composition deltas
For each dimension where both the slate AND the reference pool have data:
- Slate percentage = slate_count / slate_total
- Reference percentage = reference value
- Delta = slate_pct - reference_pct (signed; negative = under-representation in slate)
Round to 1 decimal place. Do NOT compute statistical-significance scores at the per-dimension level — slate sizes are too small for the inferential framing to mean anything.
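The per-dimension arithmetic can be sketched as follows, using the gender counts from the Inputs example (4 women of 14 total gives the 28.6% / +6.8pp row in the output-format sample). A sketch, not skill internals:

```python
def composition_deltas(slate_counts, reference_pct):
    """Per-category deltas for one dimension, in percentage points.

    Categories missing from either side are skipped: the audit does not
    assert about pairs it cannot compare.
    """
    total = sum(slate_counts.values())
    out = {}
    for category in slate_counts.keys() & reference_pct.keys():
        slate_pct = round(100 * slate_counts[category] / total, 1)
        out[category] = {
            "slate_pct": slate_pct,
            "reference_pct": reference_pct[category],
            # signed: negative = under-representation in the slate
            "delta_pp": round(slate_pct - reference_pct[category], 1),
        }
    return out

deltas = composition_deltas(
    {"woman": 4, "man": 7, "non_binary": 1, "decline_to_state": 2},
    {"woman": 21.8, "man": 76.5},
)
# deltas["woman"]["delta_pp"] → 6.8; categories absent from the
# reference pool (non_binary, decline_to_state) are skipped.
```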
### 4. Surface gaps with confidence bands
For each dimension with `|delta| >= 5pp`, emit a "gap" entry with:
- Direction (under or over)
- Magnitude (in percentage points)
- Confidence band based on slate size:
- `n >= 30` → `medium-high` confidence
- `10 <= n < 30` → `medium` confidence
- `5 <= n < 10` → `low` confidence
- `3 <= n < 5` → `informational only`
Do NOT label gaps as "concerning" or "fine." That judgment is the DEI lead's, not the skill's.
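The band mapping and the 5pp gap threshold together, as a sketch (function names illustrative; the delta dicts are assumed to match the shape from step 3):

```python
def confidence_band(n: int) -> str:
    """Map slate size to the audit's confidence label."""
    if n >= 30:
        return "medium-high"
    if n >= 10:
        return "medium"
    if n >= 5:
        return "low"
    return "informational only"  # 3-4; below 3 the audit already halted

def surface_gaps(deltas, n, threshold_pp=5.0):
    """Gap entries for categories whose |delta| crosses the threshold.

    No 'concerning'/'fine' labels: direction, magnitude, and confidence only.
    """
    return [
        {
            "category": cat,
            "direction": "under" if d["delta_pp"] < 0 else "over",
            "magnitude_pp": abs(d["delta_pp"]),
            "confidence": confidence_band(n),
        }
        for cat, d in deltas.items()
        if abs(d["delta_pp"]) >= threshold_pp
    ]
```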
### 5. Surface upstream gap candidates
For each dimension with a gap, list 3-5 likely upstream causes the recruiter and DEI lead can investigate:
- **Sourcing channel mix** — which channels did the slate come from? Channels have their own composition skews; LinkedIn, Stack Overflow Jobs, and employee referrals each surface a different mix.
- **Search query language** — does the [Boolean search builder](/en/workflows/boolean-search-builder-claude-skill/) fairness pre-flight surface anything when run against the role intake?
- **JD language** — masculine-coded language ("rockstar," "ninja," "competitive") has measurable effect on application-stage composition. The JD audit is a separate workflow.
- **Hiring-manager screen language** — what questions did the screen include? Did any function as a proxy filter?
- **Application drop-off** — at which stage did the under-represented group drop off most? If at sourcing, the channel mix is the likely cause; if at screen, the screen rubric is.
DO NOT rank these. The right intervention varies by gap source. Listing them is decision support.
### 6. Emit audit record
Append one JSONL line to `audit/<YYYY-MM>.jsonl` matching the schema in `references/2-audit-record-format.md`. The record contains:
- `audit_id` (uuid), `timestamp`, `slate_label`, `role_family`
- `slate_size`, `dimensions_audited`, per-dimension `slate_pct` / `reference_pct` / `delta` / `confidence`
- `reference_pool_source`, `reference_pool_last_verified`
- `skill_version`, `model`
NO PII. NO per-candidate fields. The audit record is what makes a NYC LL 144 submission or annual DEI review defensible; it must be immune to candidate re-identification.
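The append itself is a one-liner per record. A minimal sketch of the append-only write (file layout per `references/2-audit-record-format.md`; the helper name is illustrative, and signing is omitted):

```python
import datetime
import json
import pathlib
import uuid

def append_audit_record(record: dict, audit_dir: str = "audit") -> str:
    """Append one audit line to audit/<YYYY-MM>.jsonl.

    The file is only ever opened in append mode; records are never
    rewritten in place.
    """
    now = datetime.datetime.now(datetime.timezone.utc)
    line = {"audit_id": str(uuid.uuid4()), "timestamp": now.isoformat(), **record}
    path = pathlib.Path(audit_dir) / f"{now:%Y-%m}.jsonl"
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a") as f:
        f.write(json.dumps(line) + "\n")
    return line["audit_id"]
```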
## Output format
```markdown
# Slate audit — {slate_label}
Audited: {ISO timestamp} · Role family: {role_family} · Slate size: {n}
{SMALL-SLATE WARNING if 3-4 candidates}
{REFERENCE-POOL FRESHNESS WARNING if >18 months old}
## Reference pool
- Source: {BLS table / Stack Overflow Developer Survey 2024 / etc.}
- Last verified: {date}
## Composition deltas
| Dimension | Slate % | Reference % | Delta | Confidence |
|---|---|---|---|---|
| Gender — woman | 28.6% | 21.8% | +6.8pp | medium |
| Gender — man | 50.0% | 76.5% | -26.5pp | medium |
| Race — Asian | 35.7% | 19.3% | +16.4pp | medium |
| Race — Black | 0.0% | 8.5% | -8.5pp | medium |
| Race — Hispanic/Latino | 7.1% | 7.6% | -0.5pp | medium |
...
## Gaps surfaced (|delta| >= 5pp)
### Race — Black: under-represented by 8.5pp (medium confidence)
Upstream gap candidates to investigate:
- Sourcing channel mix — what share of the slate came from referral vs. inbound vs. cold sourcing? Referral pools tend to mirror existing team composition.
- Search query language — run the role intake through the Boolean search builder's fairness pre-flight.
- Application drop-off — at which funnel stage is the gap widest?
- Outreach response rate — does outreach response by demographic show the gap originating in candidate engagement vs. sourcing reach?
- JD language — does the JD use language that has measured composition impact on application stage?
### Race — Asian: over-represented by 16.4pp (medium confidence)
{same shape}
## Audit record
Appended to `audit/2026-05.jsonl` — record id `{uuid}`.
```
## Watch-outs
- **Reverse discrimination from "rebalancing."** *Guard:* skill never recommends per-candidate adds/removes. Output is composition deltas + upstream gap candidates only.
- **Per-candidate inference.** *Guard:* skill processes aggregate data only; per-candidate exports are aggregated and discarded immediately on load.
- **Small-slate noise.** *Guard:* refuses at <3, warns at 3-4, low confidence at 5-9.
- **Stale reference pools.** *Guard:* freshness warning at >18 months on the source.
- **Audit-record retention.** *Guard:* records are append-only JSONL with skill version embedded. Recruiters / DEI leads handle retention per firm hiring-record policy (typically 2-7 years).
# Reference-pool mapping
The diversity slate auditor compares slate composition to a reference labor-market pool. This file maps each role family to the appropriate reference source.
The defaults are BLS Occupational Employment Statistics (free, US-only, updated annually). Industry-specific overrides are listed where stronger sources exist.
## Format
Each entry has:
- `role_family` — the string the recruiter passes to the skill
- `bls_occupation_code` — the BLS SOC (Standard Occupational Classification) code
- `bls_table_url` — the canonical BLS table URL for the occupation's demographic breakdown
- `last_verified` — when this entry was confirmed against the BLS source
- `recommended_override` — a stronger source where one exists
- `notes` — caveats specific to this role family
## Mappings
### Software engineering
```yaml
role_family: senior-software-engineer
bls_occupation_code: "15-1252" # Software Developers
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: stack-overflow-developer-survey
notes: |
BLS lumps all software developer levels together. For senior+ roles,
the Stack Overflow Developer Survey breaks down by years of experience
and tends to surface a different demographic mix at 10+ years vs. all
developers. For roles requiring 8+ years experience, the SO override
is more representative.
```
```yaml
role_family: junior-software-engineer
bls_occupation_code: "15-1252"
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: null
notes: |
Junior roles draw heavily from CS programs. The CRA Taulbee Survey
has CS-bachelor's demographics that may be a better fit for new-grad
hiring slates.
```
```yaml
role_family: engineering-manager
bls_occupation_code: "11-9041" # Architectural and Engineering Managers
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: null
notes: |
Management roles have substantially different demographic distributions
from IC roles. Use this code (not the IC code) for EM/Director slates.
```
### Sales
```yaml
role_family: account-executive
bls_occupation_code: "41-3091" # Sales Representatives, Services
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: null
notes: |
Tech-AE roles and SaaS-AE roles tend to have different demographics
from the broader services-sales population the BLS code covers.
Industry-specific data is hard to come by; treat the BLS reference
as a floor.
```
```yaml
role_family: sales-development
bls_occupation_code: "41-3091"
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: null
notes: |
SDR roles are entry-level; the BLS code includes career sales reps,
which skews older. Adjust expectations for early-career composition.
```
### Customer success
```yaml
role_family: customer-success-manager
bls_occupation_code: "13-1151" # Training and Development Specialists
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: null
notes: |
No clean BLS code for CSM. The training-and-development code is the
closest occupational analog by job content; the customer-service-rep
code is too entry-level. Treat with caveat.
```
### Recruiting / HR
```yaml
role_family: recruiter
bls_occupation_code: "13-1071" # Human Resources Specialists
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: null
notes: null
```
### Marketing
```yaml
role_family: marketing-manager
bls_occupation_code: "11-2021" # Marketing Managers
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: null
notes: null
```
### Data / analytics
```yaml
role_family: data-scientist
bls_occupation_code: "15-2051" # Data Scientists
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: null
notes: |
Data scientist is a relatively new BLS code (added 2021). The
demographic data is thinner than for established occupations.
```
```yaml
role_family: data-analyst
bls_occupation_code: "15-2098" # Data Analysts
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: null
notes: null
```
### Legal
```yaml
role_family: in-house-counsel
bls_occupation_code: "23-1011" # Lawyers
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: aba-profile-of-the-legal-profession
notes: |
ABA's annual Profile of the Legal Profession has more granular
partnership/in-house/government breakdowns than BLS. For in-house
roles specifically, the ABA override is more representative.
```
## Adding a role family
To add a new role family:
1. Find the BLS SOC code that best matches the role's actual job content (not the marketing title).
2. Confirm the BLS demographic table for that occupation has the dimensions you need.
3. Add the entry to this file with `last_verified` set to today.
4. If a stronger industry-specific source exists (industry survey, professional association data), note it under `recommended_override`.
## Refresh cadence
BLS publishes Current Population Survey demographic tables annually. This file should be re-verified every 12 months. Sources older than 18 months trigger a freshness warning in the auditor's output.
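The 18-month freshness check is simple date arithmetic. A sketch, treating 18 months as 548 days (the exact cutoff is an implementation choice, not specified by the skill):

```python
from datetime import date

def freshness_warning(last_verified: str, today: date) -> str:
    """Return 'over_18_months' when the source is older than ~548 days."""
    age_days = (today - date.fromisoformat(last_verified)).days
    return "over_18_months" if age_days > 548 else "ok"
```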
# Audit-record JSONL schema
The diversity slate auditor appends one JSONL line per audit to `audit/<YYYY-MM>.jsonl`. This file documents the schema. The format is fixed because external readers (NYC LL 144 audit submission, internal DEI program review, legal discovery) need to parse the records reliably.
## Schema
```json
{
"audit_id": "uuid-v4",
"timestamp": "ISO-8601 UTC",
"skill_version": "1.0",
"model": "claude-sonnet-4-6",
"slate_label": "free-text identifier",
"role_family": "string from references/1-reference-pools.md",
"slate_size": "integer",
"slate_size_warning": "ok | small_slate_warning | informational_only",
"reference_pool": {
"source": "BLS-15-1252 | stack-overflow-developer-survey-2024 | ...",
"last_verified": "ISO-8601 date",
"freshness_warning": "ok | over_18_months"
},
"dimensions": [
{
"dimension": "gender",
"category": "woman",
"slate_pct": 28.6,
"reference_pct": 21.8,
"delta_pp": 6.8,
      "confidence": "informational only | low | medium | medium-high"
},
{
"dimension": "race_ethnicity",
"category": "Black",
"slate_pct": 0.0,
"reference_pct": 8.5,
"delta_pp": -8.5,
      "confidence": "informational only | low | medium | medium-high"
}
],
"gaps_surfaced": [
{
"dimension": "race_ethnicity",
"category": "Black",
"direction": "under",
"magnitude_pp": 8.5,
"confidence": "medium",
"upstream_candidates": [
"sourcing-channel-mix",
"search-query-language",
"application-drop-off",
"outreach-response-rate",
"jd-language"
]
}
]
}
```
## Field-by-field
- `audit_id` — uuid v4. Stable for the audit's lifetime; allows downstream systems to deduplicate.
- `timestamp` — ISO-8601 UTC of when the audit was generated, NOT when the slate was assembled.
- `skill_version` — version of this skill (semver). Allows downstream readers to handle schema evolution.
- `model` — exact model ID used (e.g. `claude-sonnet-4-6`). Required for NYC LL 144 reproducibility — the audit must identify the model that processed the data.
- `slate_label` — free-text label. Recruiter chooses; suggested format `<quarter>-<role-family>-<stage>` (e.g. `Q2-2026-senior-eng-onsite-slate`).
- `role_family` — must match a key in `references/1-reference-pools.md`. Required for the reference-pool validation chain.
- `slate_size` — integer count of the slate.
- `slate_size_warning` — `ok` if `n >= 5`, `small_slate_warning` if `3 <= n < 5`. The `informational_only` value covers `n < 3` for schema completeness, but in practice it is never written: the auditor halts at load time for slates that small, before any record exists.
- `reference_pool` — object. `source` is the named source string. `last_verified` is when the role-to-pool mapping was last confirmed against the source. `freshness_warning` is `over_18_months` if the source's `last_verified` is older than 18 months.
- `dimensions` — array of per-dimension/category records. Every dimension/category pair the slate has data for AND the reference pool has data for. Pairs missing from either side are silently skipped (the audit does not assert about dimensions it cannot compare).
- `gaps_surfaced` — array of dimensions with `|delta_pp| >= 5`. Empty array if no gaps cross the threshold. Each gap entry includes the upstream-candidate keys for the recruiter / DEI lead to investigate; the upstream candidates are NOT recommendations but a list of investigation surfaces.
## What the schema deliberately does NOT include
- **Per-candidate fields.** No candidate IDs, no per-candidate self-ID, no per-candidate scores. The skill's design point is aggregate-only inference; the audit record reflects that.
- **Statistical-significance scores.** Slate sizes are too small for inferential framing to mean anything, and surfacing a p-value invites the wrong kind of reading. The confidence band (`informational only | low | medium | medium-high`) is a coarser, more honest summary.
- **Recommendations.** The skill surfaces gaps and lists upstream candidates. It does not say "you should hire more X" or "the slate is unbalanced" — those judgments are the DEI lead's, and the skill's role is decision support, not decision automation.
- **Identifying information about the recruiter or DEI lead.** The audit record is about the slate, not about who ran the audit. Operator identity belongs in the audit log of the system that called the skill (your ATS, your scheduling tool), not in the skill's own record.
## Retention
The audit records should be retained for at least as long as the firm retains hiring records — typically 2-7 years for affirmative-action-program firms (under 41 CFR 60-1.12), longer in some EU jurisdictions. NYC LL 144 requires the bias-audit results be made publicly available; the per-slate audit records support the annual aggregation that goes public.
The skill writes append-only JSONL with the skill version embedded. Modification breaks the file's signing chain; prefer correction-via-superseding-record (write a new audit with `slate_label` referencing the original) over editing.
## Reading the records
Downstream readers (the firm's annual DEI report, the NYC LL 144 submission, an external auditor) parse the JSONL by line. The schema is forward-compatible: new optional fields can be added in future skill versions; consumers that don't recognize new fields ignore them.
For the annual aggregation, group by `role_family` and quarter, then for each `(role_family, quarter)` compute:
- Mean delta per dimension/category over all slates
- Total gaps surfaced and per-gap counts
- Trend in delta over the past four quarters
That aggregation lives outside this skill — it's a separate report. The audit records exist so that aggregation is possible.
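Assuming records follow the schema above, the quarterly mean-delta step of that external aggregation might look like this (the helper name is illustrative):

```python
import json
from collections import defaultdict
from statistics import mean

def quarterly_mean_deltas(jsonl_lines):
    """Group audit records by (role_family, quarter) and average each
    dimension/category delta across the quarter's slates."""
    buckets = defaultdict(lambda: defaultdict(list))
    for line in jsonl_lines:
        rec = json.loads(line)
        year, month = rec["timestamp"][:7].split("-")
        quarter = f"{year}-Q{(int(month) - 1) // 3 + 1}"
        for d in rec["dimensions"]:
            key = (rec["role_family"], quarter)
            buckets[key][(d["dimension"], d["category"])].append(d["delta_pp"])
    return {
        key: {pair: round(mean(v), 1) for pair, v in cats.items()}
        for key, cats in buckets.items()
    }
```

Unrecognized fields in a record are simply ignored, which is what the forward-compatibility note above asks of consumers.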