AI candidate sourcing with Claude

Difficulty: intermediate
Setup time: 45 min
For: recruiter · sourcer · talent-acquisition
Category: Recruiting & TA
A Claude Skill that takes a job description plus a target ICP, builds an AI sourcing query against Juicebox or hireEZ, scores returned candidates against the rubric, and drafts personalized outreach for the top-N — all in a single conversation. Replaces the 3-hour Boolean-search-plus-scoring-plus-outreach workflow with a 30-minute review loop.

What you’ll need

  • Claude Code or Claude.ai with custom Skills enabled
  • API access to a sourcing platform — Juicebox PeopleGPT, hireEZ, or LinkedIn Recruiter API
  • The role’s job description plus ICP rubric (target seniority, must-have skills, ideal-vs-acceptable companies)
  • Optional: ATS API access for direct write-back of sourced candidates to the recruiting CRM

Setup

  1. Drop the Skill. Place candidate-sourcing.skill into your Claude Code skills directory. The Skill exposes three callable functions: build_search_query, score_candidates, draft_outreach.
  2. Configure the sourcing API. Add your Juicebox or hireEZ API key to the Skill’s config.yaml. The Skill includes example configs for both.
  3. Define your ICP rubric template. The rubric is the calibration input — what “good” looks like for the role. See structured interviewing for the rubric design pattern.
  4. Test on a known role. Run on a role you’ve already sourced manually. Compare the Skill’s top-20 to your manual top-20. Tune the rubric if the Skill’s calibration is off.
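A minimal sketch of what steps 2 and 3 might produce together — the field names and structure here are illustrative assumptions, not the Skill’s actual schema; check the bundled example configs for the real format:

```yaml
# config.yaml — illustrative sketch only
sourcing_platform: juicebox          # or: hireez, linkedin
api_key: ${JUICEBOX_API_KEY}         # read from the environment, never committed

rubric:
  role: Senior Backend Engineer (Berlin)
  dimensions:
    skill_match:
      must_have: [Go, Rust]
      nice_to_have: [Kubernetes, gRPC]
    seniority_fit:
      target: senior                 # e.g. 5-8 years, has led projects
    company_pattern:
      ideal: fintech startup, Series A-C
      acceptable: any venture-backed product company
```

The more concrete the anchors per dimension, the better the calibration in step 4.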

How it works

The Skill takes a role and an ICP rubric and:

  1. Builds the search query. Translates the natural-language ICP (“senior backend engineers in Berlin who’ve worked at fintech startups, with Go or Rust experience”) into the platform’s specific search format (Juicebox PeopleGPT prompt, hireEZ Boolean, LinkedIn search criteria).
  2. Retrieves and scores candidates. Pulls the top 100 candidates from the platform; scores each 1-5 against the rubric dimensions (skill match, seniority fit, company-pattern fit, response-likelihood). Sorts by aggregate score.
  3. Drafts outreach for top-N. For the top 20-30 candidates, generates a personalized first-touch email pulling from the candidate’s actual background. Each email includes specific reference points (recent role, recent project, mutual connection if known).
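The score-and-sort logic of step 2 can be sketched in a few lines of Python. The dimension names, the simple averaging, and the candidate shape are assumptions for illustration; the Skill’s actual scoring is model-driven:

```python
# Sketch of the score-and-sort step. Dimension names and the flat average
# are illustrative; the real Skill scores candidates via the model.
DIMENSIONS = ("skill_match", "seniority_fit", "company_fit", "response_likelihood")

def aggregate_score(scores: dict[str, int]) -> float:
    """Average the 1-5 dimension scores into a single aggregate."""
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

def rank_candidates(candidates: list[dict]) -> list[dict]:
    """Attach an aggregate score to each candidate and sort best-first."""
    for c in candidates:
        c["aggregate"] = aggregate_score(c["scores"])
    return sorted(candidates, key=lambda c: c["aggregate"], reverse=True)

pool = [
    {"name": "A", "scores": {"skill_match": 5, "seniority_fit": 4,
                             "company_fit": 5, "response_likelihood": 3}},
    {"name": "B", "scores": {"skill_match": 3, "seniority_fit": 3,
                             "company_fit": 2, "response_likelihood": 4}},
]
ranked = rank_candidates(pool)
print([c["name"] for c in ranked])  # → ['A', 'B']
```

A weighted average (e.g. weighting skill match over response likelihood) is a natural extension once the rubric is calibrated.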

Output

  • Scored candidate list as CSV: candidate name, current role, current company, aggregate score, dimension scores, source URL
  • Drafted outreach emails per top candidate: subject line, body, suggested follow-up cadence
  • Skipped candidates with reason (e.g., “current company is on do-not-poach list”, “previously rejected by us 6 months ago”)
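For reference, the scored-list CSV described in the first bullet could be written like this — the column names follow the list above, but the exact field names are assumptions:

```python
import csv
import io

# Columns follow the output description above; exact names are illustrative.
COLUMNS = ["name", "current_role", "current_company", "aggregate_score",
           "skill_match", "seniority_fit", "company_fit",
           "response_likelihood", "source_url"]

def to_csv(rows: list[dict]) -> str:
    """Serialize scored candidates to CSV in a fixed column order."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

row = {"name": "A. Example", "current_role": "Senior Backend Engineer",
       "current_company": "ExampleFin", "aggregate_score": 4.25,
       "skill_match": 5, "seniority_fit": 4, "company_fit": 5,
       "response_likelihood": 3, "source_url": "https://example.com/profile"}
print(to_csv([row]))
```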

Where it fits

Use this Skill at the top of the recruiting funnel — replacing the sourcer’s Boolean-search-plus-scoring-plus-outreach time. The sourcer’s daily workflow becomes: review the Skill’s output, edit the top emails, send. Throughput typically goes from 10-20 candidates per day to 50-100+ on the same time budget.

Watch-outs

  • Rubric quality is everything. A vague rubric produces noise. Codify what “good” looks like with explicit anchors per dimension.
  • Don’t auto-send outreach. AI-drafted outreach is the starting point; sourcer reviews, edits, sends. Auto-send produces volume without quality and damages candidate experience.
  • Sample-validate the scoring. Periodically check that the AI’s top-scored candidates match what the sourcer would have scored manually. Drift between the two reveals rubric or model issues.
  • Bias considerations. AI-augmented sourcing can amplify rather than reduce bias. Audit the candidate pool the Skill surfaces; verify it reflects the diversity of the qualified candidate pool, not just historical hiring patterns.
  • Compliance with the EU AI Act and state/local hiring laws. Where the Skill is doing material screening (not just suggesting), bias-audit obligations may apply under the EU AI Act and US state and local laws (e.g., NYC Local Law 144’s audit requirement for automated employment decision tools).
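The sample-validation in the third watch-out can be as simple as measuring top-N overlap between the Skill’s ranking and the sourcer’s manual ranking. A sketch — the 70% alert threshold is an arbitrary starting point, not a recommendation from the Skill:

```python
def top_n_overlap(ai_ranked: list[str], manual_ranked: list[str], n: int = 20) -> float:
    """Fraction of the AI's top-n candidates that also appear in the manual top-n."""
    ai_top, manual_top = set(ai_ranked[:n]), set(manual_ranked[:n])
    return len(ai_top & manual_top) / n

ai = ["c1", "c2", "c3", "c4"]
manual = ["c2", "c1", "c5", "c4"]
overlap = top_n_overlap(ai, manual, n=4)
print(overlap)  # → 0.75 (c1, c2, c4 shared)
if overlap < 0.70:  # arbitrary threshold; tune to taste
    print("Drift detected — revisit the rubric or re-calibrate.")
```

Tracking this number over time distinguishes a one-off miss from systematic rubric or model drift.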