
Cursor rules for recruiting engineers

Difficulty: intermediate · Setup time: 15 min
For: recruiting-engineer · recruiting-ops (Recruiting & TA)

A Cursor .cursorrules file tuned for the work patterns of an in-house recruiting engineer (or a recruiting-ops manager who codes): building ATS integrations, writing MCP servers for recruiting tools, automating intake, and gluing Ashby, Greenhouse, and Lever into the rest of the recruiting stack. The artifact is one file — apps/web/public/artifacts/cursor-rules-recruiting-engineer/.cursorrules — that you drop into your project’s .cursor/rules/ directory.

The defining property of recruiting code is that every line touches data about real people who don’t know you exist. Audit logging, bias safety, jurisdictional compliance, and consent aren’t optional constraints — they’re the difference between a recruiting engineer and an enforcement action. The rules in this bundle encode that reality so Cursor’s AI assistant doesn’t suggest the kind of code that gets a firm fined or a candidate harmed.

When to use this

You’re a recruiting engineer or a recruiting-ops manager who writes integration code (Python, TypeScript) against an ATS, sourcing tool, or assessment vendor. Your team ships at least a few scripts per quarter that touch candidate data. Cursor is your IDE.

When NOT to use this

  • You don’t have a dedicated recruiting-engineering role and your “integrations” are vendor-installed Zaps. The rules assume an engineer is in the loop and authoring code; they won’t help a config-only setup.
  • You’re building an external recruiting product (an ATS, an assessment vendor, etc.). The rules are tuned for the consumer of recruiting APIs, not the builder of them. The compliance posture is different.
  • Your firm has zero candidates in the EU, California, NYC, or Illinois and is unlikely to ever have any. Several rules in the bundle reference NYC Local Law 144, the Illinois AI Video Interview Act, GDPR, and CCPA — they’re not harmful in other jurisdictions but they’re not load-bearing either.

Setup

  1. Copy the artifact. Grab .cursorrules from the bundle above (or download the zip) and drop it into your project’s .cursor/rules/ directory. Cursor’s Project Rules indicator should show the rules are active on next file open.
  2. Adjust the tool list. The rules cover Ashby, Greenhouse, Lever, Gem, hireEZ, and MCP-based recruiting tools by default. Trim or extend the tool-specific sections to match your stack — a team that uses only Ashby + Workable should delete the Greenhouse/Lever sections rather than carry dead guidance the model has to weigh.
  3. Fill in the audit destination. The rules require every read and write to produce an audit entry, but they don’t dictate where. Edit the “Audit trail” section to point at your log destination (Datadog, BigQuery, a custom audit table) so Cursor’s suggestions reference the real call.
  4. Set the secret manager. The rules ban inline credentials and direct the model toward your secret manager of choice. Pick one (1Password CLI, Doppler, AWS Secrets Manager, Vault) and edit the “Secrets and access” section so the model suggests the right call.
  5. Test on a representative task. Ask Cursor: “write a script that reads the last 100 Ashby applications, scores them against a JD, and posts the top 10 to a Slack channel.” The output should ask the right questions (which audit destination, which fields are PII, what’s the human-review fallback) before generating code. If it doesn’t, the rules aren’t loaded — check Cursor’s Project Rules indicator.
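
Steps 3-4 above can be sketched in a few lines. This is a hypothetical illustration, not code from the bundle: the `audit_log` helper, the five-field entry shape, and the `ASHBY_API_KEY` variable name are all assumptions; swap in your real log destination and secret-manager injection.

```python
import json
import os
import sys
from datetime import datetime, timezone

# Injected by your secret manager (1Password CLI, Doppler, Vault, ...) --
# never hardcoded. The variable name is a hypothetical example.
ASHBY_API_KEY = os.environ.get("ASHBY_API_KEY", "")

def audit_log(user_identity: str, system: str, action: str, data_scope: str) -> dict:
    """Emit one audit entry with the five fields the rules require."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_identity": user_identity,
        "system": system,
        "action": action,
        "data_scope": data_scope,
    }
    # Placeholder sink: point this at your real destination
    # (Datadog, BigQuery, a dedicated audit table).
    print(json.dumps(entry), file=sys.stderr)
    return entry
```

Once the “Audit trail” section names a function like this, Cursor’s suggestions can reference the real call instead of inventing one per script.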

What the rules actually do

The bundle is structured as five layers, applied to every Cursor prompt in the workspace:

  1. A “before writing code, ask” preamble. Five questions Cursor surfaces to the user before generating: which candidate-data class is involved, which jurisdictions, read-or-write, retry semantics, audit destination. The rules instruct Cursor to refuse defaults and ask explicitly. This is the single highest-leverage section — it shifts conversations from “here is a plausible script” to “here is a clarifying question that surfaces the regulatory shape.”
  2. Tool-specific guidance for Ashby, Greenhouse, Lever, sourcing tools (Gem, hireEZ), and MCP servers. Each section names real endpoints, real rate limits, real header names, and the quirks that the vendor docs gloss over. Example: Greenhouse Harvest needs an On-Behalf-Of header for audit attribution; Cursor will suggest it now.
  3. Defaults to enforce across audit, bias/fairness, idempotence, schema validation, secrets, privacy, and testing. Each default is concrete. Audit logs include (timestamp, user_identity, system, action, data_scope). Auto-rejection requires a human-review fallback for borderline scores within 10% of the cutoff. Tests run against staging instances or vendor sandboxes; never production.
  4. Anti-patterns to refuse. Specific things the model rejects when the user requests them: inline credentials in demos, skipping audit “for the prototype,” logging full webhook payloads on receipt, building a scoring feature without referencing NYC LL 144 / EU AI Act first.
  5. A “when the user is wrong” section. The patterns engineers reach for under deadline pressure that the model should push back on rather than execute. The single most cost-saving rule: refuse to deploy AI screening to NYC-resident candidates without a documented annual bias audit, because NYC LL 144 makes this a per-candidate liability.
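
The Greenhouse example in layer 2 looks like this in miniature. The helper name is a hypothetical sketch; the underlying pattern — Harvest authenticates with HTTP Basic (API key as username, empty password) and expects an On-Behalf-Of header carrying a Greenhouse user ID on write requests so the change is attributed in Greenhouse’s own audit log — should still be verified against the current Harvest docs before relying on it.

```python
import base64

HARVEST_BASE = "https://harvest.greenhouse.io/v1"  # base URL as of writing; verify

def harvest_headers(api_key: str, on_behalf_of_user_id: int) -> dict:
    """Build headers for a Greenhouse Harvest write call (hypothetical helper).

    Basic auth encodes "api_key:" (empty password); On-Behalf-Of names the
    Greenhouse user the change is attributed to.
    """
    token = base64.b64encode(f"{api_key}:".encode()).decode()
    return {
        "Authorization": f"Basic {token}",
        "On-Behalf-Of": str(on_behalf_of_user_id),
        "Content-Type": "application/json",
    }
```

This is exactly the kind of vendor quirk the tool-specific sections encode: without the rule, the model omits On-Behalf-Of and the write succeeds with no attribution.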

Cost reality

  • Token cost: effectively zero. Cursor rules are local context shipped to the model on each prompt; there are no per-request charges, only the 4-5 KB of context the file occupies. You’ll lose 1-2% of the model’s effective context window. Worth it.
  • Setup time: ~15 minutes to drop the file and edit the audit destination + secret manager.
  • Per-task overhead: the “before writing code, ask” preamble adds 1-2 turns of dialogue before the model starts generating. For a 5-minute scripting task, this is meaningful overhead. For a 30-minute integration build, it’s noise — and the questions surface decisions that would otherwise be discovered in code review or, worse, in production.
  • Maintenance: ~1 hour per quarter to review the file. Tool versions drift; what was true about the Greenhouse Harvest API last quarter may be wrong this quarter. The artifact bundle ships as a starting point, not a frozen specification.

What success looks like

  • Code review comments about audit logging drop to near-zero. The rules suggest the audit calls inline, so reviewers stop catching their absence.
  • Zero “we forgot to handle the retry case” production incidents. Webhook handlers ship idempotent on first write because the rules enforce it.
  • Bias-audit conversations happen in design, not in legal review. Cursor surfaces the relevant law (NYC LL 144, the Illinois AI Video Interview Act, EU AI Act) when the user is generating screening code, so the discussion happens before the code is written.
  • Faster onboarding for new recruiting engineers. A new hire reads .cursor/rules/recruiting-engineer.md once and understands the team’s posture; they don’t need to absorb a quarter of code review feedback to learn the conventions.
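
The idempotence outcome above reduces to a delivery-ID dedup guard. A minimal sketch, assuming hypothetical names — `SEEN_DELIVERIES`, `handle_webhook`, and `upsert_candidate` are illustrations, and in production the seen-set lives in a unique-keyed database table, not process memory:

```python
SEEN_DELIVERIES: set[str] = set()  # production: a DB table with a unique key

def upsert_candidate(payload: dict) -> None:
    # Placeholder for the real ATS/database write.
    pass

def handle_webhook(delivery_id: str, payload: dict) -> str:
    """Process a vendor webhook exactly once per delivery ID.

    ATS vendors redeliver webhooks on timeout; without this guard a
    retried delivery creates duplicate candidate records.
    """
    if delivery_id in SEEN_DELIVERIES:
        return "duplicate-ignored"
    SEEN_DELIVERIES.add(delivery_id)
    upsert_candidate(payload)
    return "processed"
```

The rules push Cursor to generate the guard on the first draft, which is why the retry-case incidents drop to zero rather than getting patched after the first duplicate flood.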

Versus the alternatives

  • No rules at all (status quo). Cursor generates plausible recruiting code that passes review until it doesn’t. The first time a webhook handler isn’t idempotent and produces 1,200 duplicate candidate records, you’ll wish the rule existed.
  • A team coding-conventions doc that nobody reads. Functionally equivalent to no rules — the doc isn’t loaded into the AI’s context, so suggestions don’t reflect it. The Cursor rules file is the team conventions doc that’s actually loaded on every prompt.
  • A linter or pre-commit hook. Catches some patterns (hardcoded secrets, missing audit calls if you write a custom rule). Doesn’t shape the AI’s suggestions during writing — it only catches problems after the fact. The Cursor rules layer is upstream of the linter; both can coexist, and should.

Watch-outs

  • Rules require Cursor Project Rules support. Older Cursor versions don’t read the .cursor/rules/ directory — they only load a single legacy .cursorrules file at the project root. Verify on the Cursor version your team uses; the indicator at the bottom-right of the editor confirms rules are active. Guard: include a one-line check in your project README (“Cursor 0.40+; rules indicator must show ‘recruiting-engineer.md active’”).
  • Don’t over-specify. Adding rules for every style preference produces over-restrictive suggestions and conflicting directives. Keep the file focused on rules that prevent material bias, privacy, or candidate-data risk; let formatting drift handle itself with Prettier or Black. Guard: hard cap the file at ~300 lines.
  • Tool API drift. Ashby and Greenhouse ship breaking changes 1-2x per year. A rule that references a deprecated endpoint generates broken code. Guard: review the file quarterly against each tool’s changelog; version-tag the rule that mentions the API version (e.g. # Greenhouse Harvest v1 (verified 2026-Q2)).
  • Rules don’t replace bias audits or code review. They shape what Cursor suggests during writing. They do not run in CI, they do not catch what the engineer overrides, and they do not constitute a NYC LL 144 bias audit. Guard: keep the human review and audit infrastructure separate; the rules complement, never replace, them.
  • Per-repo overrides matter. A rule that’s right in your sourcing-automation repo may be wrong in your assessment-integration repo. Use Cursor’s per-directory rule support (.cursor/rules/<subdir>/ overrides) when the conventions actually diverge. Guard: prefer one shared rules file with documented exceptions over forking the file per repo.

Stack

  • Cursor — IDE and rules engine
  • .cursor/rules/recruiting-engineer.md — versioned in repo, code-reviewed like any other config
  • Secret manager of choice — referenced from the rules, never inlined
  • Audit destination — Datadog, BigQuery, or a dedicated audit table; named explicitly in the rules so suggestions point at the real call
