Anyone who suggests a pair-programming solution is right, and answering the wrong question. Unless/until we return to a COVID-like market the process will never be optimized for the candidate, and this is just too expensive an approach for employers. In this market I think the answer is: hire less.
If you use the Internet, Google, and Stack Overflow at work, why insist that interviewees solve problems entirely on their own?
Or you can just give them a way to bypass all of that and ask them about any significant project the candidate actually built (relevant to the job description, open or closed source, as long as it was released), or about open-source contributions to widely used, significant projects. (Not hello-world or demo projects, or README changes.)
Both scenarios are easily verifiable (you can check that the project was released, or whether a given commit was made), and in the open-source case the interviewer can look at how you code-review with others, and how you respond to and reason about others' review comments, all in public, to see whether you actually understand the patches you or someone else submitted.
A conversation can be started around it, which eliminates 95% of frauds. If the candidate can't speak to any of this, then there's no choice but to give them a hard LeetCode/HackerRank challenge and interview them again to explain their solution and why it works.
A net positive for everyone, and all it takes to qualify is to build something you can point to, or to contribute to a significant open-source project. Unlike HackerRank, which has now become a negative-sum race to the bottom with rampant cheating thanks to LLMs.
After that, a simple whiteboard challenge and that is it.
So, when AI can pass the tech interview seamlessly, I guess we can just hire it?
Maybe the future will be human shills pretending to be job candidates for shady AI "employment agencies" that are actually just (literally) skins over GPT-6 APIs, sockpuppeted by minimum-wage developing-nation "hosts"?
Show the remote candidate an AI's deficient answer to a well-asked question, and ask the candidate if they understand what exactly is wrong with the AI's assessment, or what the follow-up/rewritten prompt to the AI should be. Compile a library of such deficient chats with the AI.
I've been considering using a second webcam stream focused on my screen just to assure hiring managers that I don't have ChatGPT on my screen, or anywhere else. Kind of like chess players do it sometimes on online tournaments. I've been hearing people complain about cheating a lot.
If using AI is cheating, then one solution, as the author mentions, is to have the interview take place at an office. But I'm surprised another approach isn't more readily available: having the candidate take the test remotely at a trusted third-party location.
We did an experiment at interviewing.io a few months ago where we asked interviewees to try to cheat with AI, unbeknownst to their interviewers.
In parallel, we asked interviewers to use one of 3 question types: verbatim LeetCode questions, slightly modified LeetCode questions, and completely custom questions.
The full writeup is here: https://interviewing.io/blog/how-hard-is-it-to-cheat-with-ch...
TL;DR:
- Interviewers couldn't tell when candidates were cheating at all
- Both verbatim and slightly modified LeetCode questions were really easy to game with AI
- Custom questions, on the other hand, were not gameable[1]
So, at least for now, my advice is that companies put more effort into coming up with questions that are unique to them. It's better for candidates because they get better signal about the work, it reduces the value asymmetry (companies have to put effort into their process instead of just grabbing questions from LeetCode etc), and it's better for employers (higher signal from the interview).
[1] This may change with the advent of better models
Don't forget to wear your cowboy hat when interviewing at google. Very important.
Why don't we simply ask the AI how to conduct a tech interview nowadays?
I've been interviewing a bunch of developers the past year or so, and this:
> Architectural interviews are likely safe for a few years yet. From talking to people who have run these, it’s evident that someone is using AI. They often stop with long pauses, do not quite explain things succinctly, and do not understand the questions well enough to prompt the correct answer. As AI gets better (and faster), this will likely follow the same fate as the rest but I would give it some years yet.
Completely matches my experience. I don't do leet code BS, just "let's have a talk". I ask you questions about things you tell me you know about, and things I expect of someone at the level you're selling yourself at. The longest it's taken me to detect one of these scumbags was 15 minutes, and an extra 5 minutes to make sure.
Some of them make mistakes that are beyond stupid, like identity theft of someone who was born, raised and graduated in a country whose main language they cannot speak.
The smartest ones either do not know when to stop answering your questions with perfect answers (they just do not know what they're supposed to not know), or fumble their delivery and end up looking like unauthentic puppets. You just keep grinding them until you catch em.
I'm sure it's not infallible, but that's inherent to hiring. The only problem with this is cost, you're going to need a senior+ dev running the interview, and IME most are not happy to do so. But this might just be what the price of admission for running a hiring pipeline for software devs is nowadays. Heck, now feels like a good time to start a recruitment process outsourcing biz focused on the software industry.
LLMs killed busy work. Now people have to actually talk to each other, and they're finding out that we've been imitating functionality instead of being functional.
It hasn't killed the interview, it's killed the career field. Most people just haven't realized this yet.
thank fuck. they are terrible. being interviewed by CTOs just out of university, with no experience, for a senior-in-everything role. they ask you to do some lame assignment, a pet problem, without once looking at 20 years of GitHub repos and open source contributions.
One option is missing from the list of non-solutions the author presents: ditch the idiotic whiteboard/"coding exercise" interview style. Voilà, the AI (non)problem solved!
This sort of comp-sci-style exam with quizzes and whatnot may help somewhat when hiring a junior with zero experience, fresh out of school.
But why are people with 20+ years of easily verifiable experience (picking up a phone and asking for references is still a thing!) being asked to invert trees and implement stuff like quicksort or some contrived BS assignment the interviewer uses to boost their own ego but with zero relevance to the day to day job they will be doing?
Why are we still wasting time with this? Why is the default assumption always that applicants are all crooked impostors lying on their resumes?
99% of jobs come with a probationary period anyway, where the person can be fired on the spot without justification or any strings attached. That should be more than enough time to see whether the person knows their stuff, after having passed one or two rounds of oral interviews.
It is good enough for literally every other job - except for software engineering. What makes us the special snowflakes that people are being asked to put up with this crap?
AI did not kill tech interviews and anyone saying so is probably not interviewing correctly or trying to outsource the work to one of these other companies.
I’ve run interviews in the last year where I told people they were free to use LLMs in the initial coding (which we don’t watch in realtime) but when they are explaining the code to us we ask them to make a change (a tiny additional feature, 2-6 lines of code depending on how they originally solved it). The people who can’t are immediately disqualified (in our heads, obviously we were cordial in the meeting). Just to note, we let them use LLMs for the change too, it doesn’t help the people who are completely relying on the LLM.
You can use LLMs all you want IF, and only if, you understand what it’s outputting and can build on top of that and/or modify the code it wrote.
As for the deepfake video stuff, ehh, I don't know, since I've never seen it "in person" (only videos of how it could look), so I'm not sure whether I would notice it. We have multiple interviews, and I'm confident that the speed of answers would give away someone using AI, or they would say something stupid the AI hallucinated. That confidence comes from talking to LLMs extensively. They can BS for a little bit, but then it becomes clear they're full of it.
Also LLMs aren’t going to answer questions like “tell me about a project you worked on at a previous company” or “describe a time where you had to fix production issue and how you went about it”. I had many people trip up here and be unable to describe a past project with any level of detail.
Bottom line, I ask a lot of questions an LLM cannot answer well (not on purpose; these questions were written before I thought about LLMs/AI), and I ask people to make a minor edit to code "they" just wrote.
Lastly, the interview process is only the first step. What are these people doing after they get hired? Are they continuing to pay for AI tools to help them lie about who they are? It sure seems like their skill, or rather lack thereof, will become readily apparent even if they somehow made it past the interview process (which, again, I don't understand unless you have a bad interview process).
Never really liked leetbro interviews. Always reeked of "SO YOU THINK YOU CAN CODE BRO? SHOW ME WHAT YOU GOT!" The majority of my work over 10+ years of experience always relied on general problem solving and soft skills like collaborating with others. Not rote memorization of in-order traversal.
No, it has not. We still have the same situation.
here's the question, here's the code that ChatGPT produced. what's wrong with it?
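A made-up instance of that format (the prompt, the function name, and the bug are all my own invention, not from any real ChatGPT transcript). The candidate is shown the first function and asked what's wrong; the second is what a good answer looks like:

```python
# Interview prompt (hypothetical): "Asked for 'deduplicate a list while
# preserving order', ChatGPT produced this. What's wrong with it?"
def dedupe(items):
    return list(set(items))  # bug: set() throws away the original order

# What a candidate should spot and fix: keep first-seen order.
def dedupe_fixed(items):
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out
```

The nice property of this format is that the LLM already "answered", so a candidate piping the question back into an LLM gains much less than one who can actually read code.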
> Tech interviews are one of the worst parts of the process and are pretty much universally hated by the people taking them.
True.
> One of the things we can do, however, is change the nature of the interviews themselves. Coding interviews today are quite basic, anywhere from FizzBuzz, to building a calculator. With AI assistants, we could expand this 10x and have people build complete applications. I think a single, longer interview (2 hours) that mixes architecture and coding will probably be the way to go.
Oh.... yeah, that sounds just... great.
No it didn't, you just need to stop asking questions an LLM can easily solve, most of those were probably terrible questions to begin with.
I can create a simple project with 20 files, where you would need to check almost all of them to understand the problem you need to solve, good luck feeding that into an LLM.
Maybe you have some sneaky script or IDE integration that does this for you, fine, I'll just generate a class with 200 useless fields to exhaust your LLM's context length.
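As a sketch of what that decoy generator could look like (the class name, field naming, and Java-ish output are all arbitrary choices of mine):

```python
# Sketch: emit a Java-style class stuffed with decoy fields, so that
# pasting the "project" into an LLM burns context on useless tokens.
def make_decoy_class(name="ConfigHolder", n_fields=200):
    lines = [f"public class {name} {{"]
    for i in range(n_fields):
        lines.append(f"    private int unusedField{i:03d} = {i};")
    lines.append("}")
    return "\n".join(lines)
```

Whether this actually defeats a model with a large context window is doubtful; the point is only that inflating the haystack is trivial for the interviewer and costly for the cheater.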
Or I can just share my screen and ask you to help me debug an issue.
I cannot emphasize this enough. Coding is the EASY part of writing software. You can teach someone to code in a couple of months. Interviews that focus on someone's ability to code are just dumb.
What you need to do is see how well they can design before writing software. What is their process for designing the software they make? Can they architect it correctly? How do they capture user's mental models? How do they deal with the many "tops" that software has?
It's a tricky subject, because what if people who use AI are just better together? And what if in a year from now, AI by itself is better? What's the point of hiring anyone? Perhaps this is the issue behind the problems being described, which might be mere symptoms. There are tons of very smart teams working on software that will basically replace the people you're hiring.
None of this makes any sense. Why should I complete a tech test interview if I have 15 years of experience at X top firm? I would have done it already anyway.
I had a ‘principal engineer’ at last place who grinded leetcode for 100 days and still failed a leetcode interview. It’s utter nonsense.
A conversation with technical questions and topics should suffice. Hire fast and fire fast.
I know nobody likes doing tech interviews, but how has AI killed them? Anyway, you do want to know the basics of computer science; it's a helpful thing to know if you ever want to progress beyond CRUD shit-shovelling.
Also, wtf is inverting a binary tree? Like doing a "bottom view"? That shit is easy.
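For reference, "inverting" a binary tree usually just means mirroring it: swap the left and right children at every node. A minimal sketch (my own `Node` class, not from the thread):

```python
# Mirror a binary tree by swapping left/right subtrees recursively.
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def invert(node):
    if node is None:
        return None
    # Swap the already-inverted subtrees in one assignment.
    node.left, node.right = invert(node.right), invert(node.left)
    return node
```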
What a BS article. As they say, just do the interview in person. Problem solved. Not sure about the US but 99% of jobs here in Spain are hybrid or onsite ("presencial"), not fully remote.
They're acting like all jobs are remote and it's impossible to do an interview in person.
Also, does it really matter? If a person is good at using AI and manages to be good at creating code with that, is it really so much worse than a person that does it from the top of their head? I think we have to drop the idea that AI is going to go away. I know it's all overhyped right now but there is definitely something to it. I think it will be another tool in our toolboxes. Just like stackoverflow has been for ages (and that didn't kill interviews either).
Come on - it was already dead for a long time.
"Video killed the radio star"
The best interview process I've ever had was going to work with former coworkers, aka no real process. A couple of quick calls with new people who deferred strongly to the person who knew me, my work, and my values. Nothing else has the signal value.
Of course the problem is this can't scale or be outsourced to HR, but is this a bug or a feature?