Hacker News

well_ackshually | last Saturday at 4:05 PM | 3 replies

Do you have any proof they don't? Do you have any proof that the "AI system" they use to filter out candidates doesn't "accidentally" access that data? Are you willing to bet that Google, OpenAI, Anthropic, and Meta won't sell access to that information?

Also, in some cases they absolutely do. Try to get hired at Palantir and see how much they know about your browsing history. Anything related to national security or requiring a clearance gets you investigated.


Replies

linkregister | last Saturday at 5:48 PM

The last time I went through the Palantir hiring process, the effort on their end went almost exclusively into technical and cultural-fit interviews. My references told me they had not been contacted.

Calibrating your threat model against this attack is unlikely to give you any alpha in 2026. Hiring at tech companies and in government is much less deliberate than your mental model supposes.

The current extent of background checks is an API call to Checkr. This is simply to control hiring costs.
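For concreteness, that kind of check is roughly two REST calls: register the candidate, then order a report against a screening package. The sketch below is illustrative only; the endpoint paths, field names, and package slug are assumptions, not taken from Checkr's current API documentation.

    import os

    import requests

    # Illustrative sketch only: paths, fields, and the package slug are
    # assumptions, not verified against Checkr's current API docs.
    API_KEY = os.environ["CHECKR_API_KEY"]
    BASE = "https://api.checkr.com/v1"
    auth = (API_KEY, "")  # API key as the basic-auth username, blank password

    # Register the candidate being screened (sample data only).
    candidate = requests.post(
        f"{BASE}/candidates",
        auth=auth,
        json={
            "first_name": "Jane",
            "last_name": "Doe",
            "email": "jane.doe@example.com",
            "zipcode": "94103",
        },
    ).json()

    # Order a report against a pre-configured screening package.
    report = requests.post(
        f"{BASE}/reports",
        auth=auth,
        json={"candidate_id": candidate["id"], "package": "employee_standard"},
    ).json()

    print(report["id"], report.get("status"))

The point is that the whole thing is a cheap, automated transaction, not an investigator digging through your browsing history.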

As a heuristic, a threat model built on speculated information rather than evidence is unlikely to yield a helpful framework.

raw_anon_1111 | last Saturday at 4:54 PM

As if any company that did that would be a company I'd want to work for.

For instance, back when I was interviewing at startups and other companies where I would have been a strategic hire, I would casually mention how much I enjoyed spending weekends on my hobbies and with my family, so that companies looking for someone “passionate” to work 60 hours a week and be on call wouldn't even extend an offer.

ffsm8 | last Saturday at 4:23 PM

[flagged]