I've been rejected from jobs before.
Normal stuff. Bad timing. Not the right fit. Once, I think I just bombed the interview by confusing Java with JavaScript. (Narrator: He did not confuse them. He just forgot how to write either of them on a whiteboard.)
But now there's a new reason to get rejected: an AI decided you weren't worth a human's time.
And here's the kicker: that AI might be rejecting you because of your name, your zip code, or the year you graduated.
🏛️ NYC Says "Prove It" (Local Law 144)
In a move that terrified HR tech vendors everywhere, New York City passed Local Law 144.
It essentially says: "If you use AI to hire people in NYC, you must:"
- Audit the tool for bias annually.
- Publish the results publicly.
- Notify candidates that an AI is judging them.
This creates a paper trail for what we all suspected: the algorithms are not alright.
🚩 The "Bias Amplification" Effect
We used to think: "Humans are biased. Machines are math. Math is objective. Therefore, machines remove bias."
System error. ❌
Machines learn from data. Data comes from history. History is... problematic.
The Amazon Example (The Cautionary Tale): Amazon built an AI recruiting tool trained on 10 years of resumes submitted to the company. Since the tech industry is male-dominated, most "successful" resumes were from men.
The AI learned: "Male = Good. Female = Bad."
It started downgrading resumes that contained the word "women's" (e.g., "captain of women's chess club"). It downgraded graduates of two all-women's colleges.
Amazon scrapped it. But how many other companies are using similar tools right now without knowing?
🤖 How the Black Box Works
Most Resume Screening AIs work like this:
- Ingest: Takes your PDF/Word doc.
- Feature Extraction: Finds keywords (React, Python), schools, companies.
- Pattern Matching: Compares you to "top performers" currently at the company.
- Score: Assigns a 0-100 match score.
The Flaw: If your current "top performers" all went to Stanford and play lacrosse, the AI will prioritize candidates who went to Stanford and play lacrosse.
It doesn't optimize for "skill." It optimizes for "sameness."
It creates a monoculture at the speed of light.
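To make that concrete, here's a toy screener in Python. Everything in it is an assumption for illustration: a bag-of-words tokenizer, cosine similarity against two made-up "top performer" resumes, and a 0-100 score. It's not any vendor's real pipeline, but it shows how the "sameness" problem falls straight out of the math:

```python
# A toy version of the four steps above. Pure illustration: the tokenizer,
# the similarity metric, and the "top performer" resumes are all assumptions,
# not any vendor's actual implementation.
import math
import re
from collections import Counter

def extract_features(resume_text: str) -> Counter:
    """Feature extraction: a crude bag-of-words over lowercased tokens."""
    return Counter(re.findall(r"[a-z']+", resume_text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Pattern matching: cosine similarity between two token histograms."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def match_score(candidate: str, top_performers: list[str]) -> int:
    """Score: 0-100 similarity to the average current 'top performer'."""
    cand = extract_features(candidate)
    sims = [cosine(cand, extract_features(tp)) for tp in top_performers]
    return round(100 * sum(sims) / len(sims))

# Hypothetical "top performers" -- note the proxies that have nothing to do with skill.
top_performers = [
    "Stanford CS grad, Python, React, varsity lacrosse captain",
    "Stanford MS, Python backend, lacrosse club, hackathon winner",
]

print(match_score("Python, React, 5 years backend experience", top_performers))   # ~32
print(match_score("Stanford lacrosse alum, some Python exposure", top_performers))  # ~43
# The second resume outscores the first: the model rewards sameness
# (stanford, lacrosse) just as heavily as actual skills (python, react).
```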
🤔 The "Objective" Lie
Here's what annoys me as an engineer.
Vendor sales decks say: "Eliminate unconscious bias!"
But AI doesn't eliminate bias. It codifies it. It takes unconscious bias and writes it into a JSON configuration file.
Once bias is in code, it scales.
- A racist human recruiter can reject 10 people a day.
- A racist algorithm can reject 10,000 people a minute.
That is not progress. That is efficient discrimination.
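To show what "codified" means, here's a deliberately contrived sketch. The weights file is invented (this is not Amazon's tool or any real product), but it's the shape of artifact a trained screener exports, and it mirrors the "women's" penalty from the Amazon story:

```python
# A purely hypothetical weights file -- the kind of artifact a trained model
# exports. No human typed the negative weight; skewed training data produced it,
# and now it runs against every applicant with nobody in the loop.
import json

LEARNED_WEIGHTS = json.loads("""
{
  "python": 0.9,
  "react": 0.7,
  "stanford": 0.6,
  "lacrosse": 0.4,
  "women's": -0.8
}
""")

def score(resume_text: str) -> float:
    text = resume_text.lower()
    return sum(weight for term, weight in LEARNED_WEIGHTS.items() if term in text)

# One biased entry, applied to the whole applicant pool in milliseconds.
for resume in [
    "Captain of women's chess club, Python, React",
    "Chess club captain, Python, React",
]:
    print(f"{score(resume):+.1f}  {resume}")
# Prints +0.8 for the first resume and +1.6 for the second:
# identical skills, and the gap is now a config entry, not a gut feeling.
```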
🛡️ The New Arms Race: AI vs. AI
So what happens when candidates realize the game is rigged? They cheat.
Enter the "Resume Optimization" industry.
Candidates are now using AI (ChatGPT, specialized tools) to rewrite their resumes to beat the AI screeners.
- "Add white text with keywords at the bottom!" (Don't do this, parsers catch it now.)
- "Mirror the job description exactly!"
- "Use 'Manager' instead of 'Lead' because the vector embedding prefers it!"
It's AI candidates applying to AI recruiters. Humans are just the biological substrate waiting for the meeting invite.
🎯 My Take
I'm glad NYC is forcing audits. Sunlight is the best disinfectant.
But audits are step one. The real question is: what happens when the audit finds bias?
Most companies faced with a "bias found" result have three options:
- Fix the expensive tool (hard, maybe impossible without new data).
- Stop using the tool (expensive, slows down hiring).
- Publish the audit and hope nobody reads it.
I've seen the corporate world. I'd bet on option 3.
For job seekers: Optimizing your resume for robots isn't "gaming the system" anymore. It's survival. The robot is the gatekeeper. You have to speak its language before you can speak to a human.
And if you get rejected instantly? Don't take it personally. It wasn't a person. It was a regression model with a bad training set. 🙃