The death of shitty interviews has been greatly exaggerated.
AI might make, say, your leetcode interview less predictive than it used to be. But was it predictive in the first place? I don't think most interviews are designed by people thinking in those terms at all. If your method of interviewing never depended on data suggesting it actually, you know, worked, why would it matter that it now works even worse?
Insofar as AI makes the shittiness of those interviews more visible, that's a good thing. An interview focused on recall of some specific algorithm was never predictive; it's just that now it fails in a way that Generic Business Idiots can understand.
We frequently interview people who both (a) claim to have been in senior IC roles (not architect positions, roles where they are theoretically coding a lot) for many, many years and (b) cannot code their way out of a paper bag when presented with a problem that requires even a modicum of original reasoning. Some of that might be interview nerves, of course, but a lot of these people are not at all unconfident. They just...suck. And I wonder if what we're seeing is the downstream effect of Generic Business Idiots hiring people who memorize stuff rather than people who build stuff.
> A lot of these people … just suck.
Another possibility is that their job subtly drifted.
I wrote a lot of code as a grad student, but my first interviews afterward were disasters. Why? Because I’d spent the last few months writing my thesis, and the few months before that writing very specific kinds of code (signal processing, visualization) that were miles away from generic interview questions like “Make the longest palindrome.”
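For anyone who hasn’t run into it, that question is usually a variant of “return the longest palindromic substring,” and the expected answer is something like the expand-around-center approach below — a minimal Python sketch, just to show how far this sits from signal-processing or visualization work:

    def longest_palindrome(s: str) -> str:
        """Return the longest palindromic substring of s."""
        if not s:
            return ""
        best = s[0]
        for i in range(len(s)):
            # Try both odd-length (center at i) and even-length
            # (center between i and i+1) palindromes.
            for lo, hi in ((i, i), (i, i + 1)):
                while lo >= 0 and hi < len(s) and s[lo] == s[hi]:
                    lo -= 1
                    hi += 1
                # The loop overshoots by one on each side, so slice back in.
                if hi - lo - 1 > len(best):
                    best = s[lo + 1:hi]
        return best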