Automating an in-house recipe to pick promising candidates amongst applicants
Three hundred applications come in for a senior product role over ten days. The agent scores every submission against the job spec, the team's rubric, and the compensation band. Strong matches get a two-line summary of what stands out. Clear mismatches get a draft decline citing the specific gap: wrong seniority level, outside the comp range, missing a required domain. Career changers and non-traditional backgrounds get flagged for recruiter review with a note on what's strong and what's uncertain. Every application is scored on the same rubric, regardless of format or background. The hiring manager starts from a ranked shortlist, and every applicant receives a message, a rarity that strengthens the business's reputation as a desirable employer.
Key Takeaways
Shortlist in hours
300 applications screened and ranked in the time it would take a recruiter to read 30.
Criteria-first evaluation
Every candidate is scored against the same requirements in the job description — not against the previous candidate.
Structured notes, not gut feel
Each shortlisted candidate comes with a structured note: requirements met, requirements partially met (with the analogy explained), and requirements missing.
Non-obvious fits surfaced
Candidates with transferable skills who don't match the exact keyword pattern are evaluated on substance, not filtered out on phrasing.
Consistent across the pool
The same criteria applied to every applicant, with no variation based on reviewer or time of day.
Talent acquisition teams review hundreds of applications for every role. The screening process is linear, inconsistent, and slow. The fifth résumé reviewed gets a more patient read than the fiftieth. Candidates who don't match keywords get filtered before a human ever assesses whether their background is actually relevant.
The automation reads the job description and extracts the discrete requirements: must-have experience, preferred skills, and disqualifying conditions. It reads every résumé in the applicant pool and scores each candidate against the requirement set. It notes where requirements are met directly, where they're met by analogy (a different title for the same function), and where they're absent.
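The met / by-analogy / absent classification above can be sketched in plain Python. This is an illustrative stand-in, not the actual implementation: the real system would judge "met by analogy" with a language model, whereas here a hand-written synonym map plays that role, and all names (`Evaluation`, `evaluate`, `SYNONYMS`) are assumptions.

```python
from dataclasses import dataclass, field

# Stand-in for LLM judgment: maps a requirement to titles that
# describe the same function under a different name.
SYNONYMS = {
    "product manager": {"product owner", "product lead"},
}

@dataclass
class Evaluation:
    met: list = field(default_factory=list)
    partial: list = field(default_factory=list)   # (requirement, analogy note)
    missing: list = field(default_factory=list)

    @property
    def score(self) -> float:
        # Full credit for direct matches, half credit for matches by analogy.
        total = len(self.met) + len(self.partial) + len(self.missing)
        return (len(self.met) + 0.5 * len(self.partial)) / total if total else 0.0

def evaluate(requirements: list, resume: str) -> Evaluation:
    """Score one résumé against the extracted requirement set."""
    ev = Evaluation()
    text = resume.lower()
    for req in requirements:
        r = req.lower()
        if r in text:
            ev.met.append(req)
            continue
        analogy = next((s for s in SYNONYMS.get(r, ()) if s in text), None)
        if analogy:
            ev.partial.append((req, f"'{analogy}' covers the same function"))
        else:
            ev.missing.append(req)
    return ev
```

Because `evaluate` takes the requirement set as input, the same function runs unchanged against every résumé in the pool, which is what makes the resulting scores comparable.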
The shortlist is ranked by match quality, with a structured evaluation note for each candidate. The recruiter reads 15 notes instead of 300 résumés, and the notes are comparable — same format, same criteria, same structure. Candidates who stood out for non-obvious reasons are flagged with a brief explanation.
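The ranking-and-notes step can be sketched as a sort plus one fixed note template, so every note in the shortlist has the same shape. The dict keys and function names here are hypothetical; the source does not specify the note format.

```python
def render_note(name, score, met, partial, missing):
    # One fixed template so notes are directly comparable across candidates.
    return "\n".join([
        f"{name}: match {score:.0%}",
        "  met: " + (", ".join(met) or "none"),
        "  partial: " + ("; ".join(f"{r} ({why})" for r, why in partial) or "none"),
        "  missing: " + (", ".join(missing) or "none"),
    ])

def shortlist(evaluations, top_n=15):
    """Rank the whole pool by match score and render notes for the top slice."""
    ranked = sorted(evaluations, key=lambda e: e["score"], reverse=True)
    return [
        render_note(e["name"], e["score"], e["met"], e["partial"], e["missing"])
        for e in ranked[:top_n]
    ]
```

The recruiter's reading load is capped by `top_n`, not by the size of the applicant pool.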
One signature per candidate
A single DSPy signature evaluates each résumé against the extracted requirement set and returns a match score, requirements met, partial matches with the analogy explained, and a structured evaluation note. The same logic runs against every applicant in the pool.
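The source does not show the signature itself, but its inputs and outputs imply a record roughly like the following. This is a plain-dataclass sketch of the field layout only; the names are assumptions, not the actual predict-rlm or DSPy signature definition.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CandidateEvaluation:
    """Hypothetical output record of the per-candidate signature."""
    match_score: float                            # 0.0-1.0 against the requirement set
    requirements_met: tuple                       # requirements met directly
    partial_matches: tuple                        # (requirement, analogy explained) pairs
    evaluation_note: str                          # structured note for the recruiter
```

Every applicant in the pool gets an instance of the same record, which is what keeps the shortlist comparable field-for-field.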
Built on predict-rlm — open source. github.com/Trampoline-AI/predict-rlm