Hacker News

tavavex · yesterday at 4:17 PM

The year is 2032. One of the big tech giants has introduced Employ AI, the premier AI tool for combating fraud and helping recruiters sift through thousands of job applications. It is now used in over 70% of HR departments, for nearly every kind of position, from senior developers to minimum-wage workers.

You apply for a job, using the standardized Employ resume that you filled out. It comes bundled with your Employ ID, issued by the company to keep track of which applications have been submitted specifically by you.

When Employ AI does its internet background check on you, it discovers an article about a horrific attack. Seven dead, twenty-six injured. The article lists no name for the suspect, but it does have an expert chime in, one who happens to share your last name. Your first name also happens to pop up somewhere in the article.

With complete confidence that this is about you, Employ AI adds the article to its reference list. It condenses everything into a one-line summary: "Applicant is a murderer, unlikely to promote team values and social cohesion. Qualifications include..." After looking at your summary for 0.65 seconds, the recruiter rejects your application. Thanks to your Employ ID, this article has now been stapled to every application you'll ever submit through the system.

You've been all but blacklisted from working. For some reason, none of your applications ever make it past the initial screening. You can't even learn that the article exists; no one will tell you this information. And even if you find out, what are you going to do about it? The company will never hear your pleas. It is too big to ever care about someone like you, and it is not in the business of making exceptions. Legally speaking, it's technically not the software making final screening decisions, and it does say its summaries are experimental and might be inaccurate, in 8pt light gray text on a white background. You are an acceptable loss, as statistically <1% of applicants find themselves in this situation.


Replies

dakial1 · yesterday at 4:32 PM

Well, there is always the option of creating regulation: if an employer uses AI to summarize applications, it must share the full content of the report with you before the recruiter sees it, so that you can point out any inconsistency or error, or better explain some passages.

chabes · yesterday at 4:19 PM

They didn’t even share names in the case of the OP

FourteenthTime · yesterday at 4:34 PM

This is the most likely scenario. Half-baked AIs used by all the tech giants and sub-giants will make a mess of our identities. There needs to be a way for us to review and approve information about ourselves that goes into permanent records. In the '80s you sent a resume to a company and they kept it on file. It might have been BS, but I attested to it. Maybe... ugh, I can't believe I'm saying this: blockchain?

xattt · yesterday at 4:28 PM

I roll 12. Employ AI shudders as my words echo through its memory banks: “Record flagged as disputed”. A faint glow surrounds my Employ ID profile. It is not complete absolution, but a path forward. The world may still mistrust me, but the truth can still be reclaimed.

tantalor · yesterday at 4:24 PM

What's to stop you from running the same check on yourself, so you can see what the employers are seeing?

If anything, this scenario makes the hiring process more transparent.

buyucu · yesterday at 4:38 PM

I think 2032 is unrealistic. I expect this to happen by 2027 at the latest.

bell-cot · yesterday at 4:29 PM

IANAL...but at scale, this might make some libel lawyers rather rich.
