Code reviews.
Teams are really sleeping on code reviews as an assessment tool. As in having the candidate review code.
A junior, a mid-level, a senior, and a staff engineer are going to see very different things in the same codebase.
Not only that, but as AI-generated code becomes more common, teams might want to actively select for devs who can efficiently review code for quality and correctness.
I went through one interview with a YC company that had a first round code review. I enjoyed it so much that I ended up making a small open source app for teams that want to use code reviews: https://coderev.app (repo: https://github.com/CharlieDigital/coderev)
Company A wants to hire an engineer, an AI could solve all their tech interview questions, so why not hire that AI instead?
There's very likely a real answer to that question, and that answer should shape the way that engineer should be assessed and hired.
For example, it could be that the company wants the engineer to assess whether a feature should be implemented at all, and if so, in what way. Then you could, in an interview, give a bit of context and ask the candidate to think out loud about an example feature request.
It seems to me the heart of the problem is that companies aren't very clear about what value the engineers add, and so they have trouble deciding whether a candidate could provide that value.
> Using apps like GitHub Co-pilot and Cursor to auto-complete code requires very little skill in hands-on coding.
this is a crazy take in the context of coding interviews. first, it's quite obvious if someone is blindly copying and pasting from cursor, for example. second, figuring out what to do is a significant portion of the battle: if you can get cursor to solve a complex problem, elegantly, and in one try, the likelihood that you're actually a good engineer is quite high.
if you're solving a tightly scoped and precise problem, like most coding interviews, the challenge largely lies in identifying the right solution and debugging when it's not right. if you're conducting an interview, you're also likely asking someone to walk through their solution, so it's obvious if they don't understand what they're doing.
cursor and copilot don't solve for that, they make it much easier to write code quickly, once you know what you're doing.
I was asked by an SME to code on a whiteboard for an interview (in 2005? I think?). I asked if I could have a computer, they said no. I asked if I would be using a whiteboard during my day-to-day. They said no. I asked why they used whiteboards, they said they were mimicking Google's best practice. That discussion went on for a good few minutes and by the end of it I was teetering on leaving because the fit wasn't good.
I agreed to do it as long as they understood that I felt it was a terrible way of assessing someone's ability to code. I was allowed to use any programming language because they knew them all (allegedly).
The solution was a pretty obvious bit-shift. So I wrote memory registers up on the board and did it in Motorola 68000 Assembler (because I had been doing a lot of it around that time), halfway through they stopped me and I said I'd be happy to do it again if they gave me a computer.
They offered me the job. I went elsewhere.
I've accidentally been using an AI-proof hiring technique for about 20 years: ask a junior developer to bring code with them and ask them to explain it verbally. You can then talk about what they would change, how they would change it, what they would do differently, if they've used patterns (on purpose or by accident) what the benefits/drawbacks are etc. If they're a senior dev, we give them - on the day - a small but humorously-nasty chunk of code and ask them to reason through it live.
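Purely for illustration (not the actual snippet anyone hands out), here is the flavor of thing that works well for that live reasoning exercise: a few innocent-looking lines of Python that misbehave on the second call.

```python
# Illustrative only: a classic mutable-default-argument trap. It "works"
# on the first call, then quietly reuses the same list on every later call.
def append_tag(tag, tags=[]):
    tags.append(tag)
    return tags

first = append_tag("a")   # ["a"]
second = append_tag("b")  # ["a", "b"] -- same list object as `first`
print(first is second)    # True
```

A candidate who can spot why `first` and `second` are the same object, and explain the fix (default to None and create the list inside the function), is demonstrating exactly the kind of reasoning this style of interview is after.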
Works really well and it mimics what we find is the most important bit about coding.
I don't mind if they use AI to shortcut the boring stuff in the day-to-day, as long as they can think critically about the result.
Nowadays I am on the other side of the fence: I am the interviewer. We are not a FAANG, so we just use a SANE interview process. Single interview: we ask the candidate about his CV, what his expectations are, and what his competences are, and we ask him to show us some code he has written. That's all. The process is fast and extremely effective. You can weed out weak candidates in minutes.
What I've been thinking about leetcode medium/hard as a 30-45 minute tech interview (as there are a few minutes of pleasantry and 10 minutes reserved for questions) is that you are really only likely to reveal 2 camps of people, taking in good faith that they are not "cheating": one approaching the problem from first principles, and the other already knowing the solution.
Take maximum subarray problem, which can be optimally solved with Kadane's algorithm. If you don't know that, you are looking at the problem as Professor Kadane once did. I can't say for sure, but I suspect it took him longer than 30-45 minutes to come up with his solution, and I also imagine he didn't spend the whole time blabbering about his thought process.
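For reference, the solution is only a few lines once you already know it. A minimal Python sketch of Kadane's approach:

```python
# Kadane's algorithm for the maximum subarray problem: O(n) time, O(1) space.
def max_subarray(nums: list[int]) -> int:
    best = current = nums[0]
    for x in nums[1:]:
        # Either extend the running subarray or start fresh at x.
        current = max(x, current + x)
        best = max(best, current)
    return best

assert max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]) == 6  # [4, -1, 2, 1]
```

The trick is the single insight on the `current = max(...)` line; deriving that insight from scratch under a 45-minute clock is the part being glossed over.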
I often see comments like: this person had this huge storied resume but couldn't code their way out of a paper bag. Now having been that engineer stuck in a paper bag a few times, I think this is a very narrow way to view others.
I don't know the optimal way to interview engineers. I do know the style of interview that I prefer and excel at[0], but I wouldn't be so naive as to think that the style that works for me would work for all. Often I chuckle about an anecdote from the fabled I.P. Sharp: Ian Sharp would set a light meter on his desk and measure how wide an interviewee's eyes would get when he explained APL to them. A strange way to interview, but is it any less strange than interviewing people via leetcode problems?
0: I think my ideal tech screen interview question is one that:
1) has test cases;
2) ramps the test cases up gradually in complexity;
3) doesn't reveal the complexity all at once; the interviewer "hides their cards," so to speak;
4) is focused on a data structure rather than an algorithm, such that the algorithm falls out naturally rather than serving as the focus;
5) gives the candidate the opportunity to weigh tradeoffs, make compromises, and cut corners given the time frame;
6) doesn't combine big ideas (i.e. you shouldn't have to parse complex input and do something complicated with it); pick a single focus.
Interviews I have participated in and enjoyed like this: construct a Set class (union, difference, etc.); implement an rpn calculator (ramp up the complexity by introducing multiple arities); create a range function that works like the python range function (for junior engineers, this one involves a function with different behavior based on arity).
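To make that last exercise concrete, here is a minimal Python sketch (the name my_range is illustrative) of the arity-based behavior being described:

```python
# A function whose behavior depends on arity, like the built-in
# range(stop), range(start, stop), and range(start, stop, step).
def my_range(*args):
    if len(args) == 1:
        start, stop, step = 0, args[0], 1
    elif len(args) == 2:
        (start, stop), step = args, 1
    elif len(args) == 3:
        start, stop, step = args
    else:
        raise TypeError("my_range expected 1 to 3 arguments")
    if step == 0:
        raise ValueError("step must not be zero")
    result = []
    x = start
    # Walk toward stop in either direction depending on the sign of step.
    while (step > 0 and x < stop) or (step < 0 and x > stop):
        result.append(x)
        x += step
    return result

assert my_range(5) == [0, 1, 2, 3, 4]
assert my_range(2, 10, 3) == [2, 5, 8]
assert my_range(5, 0, -1) == [5, 4, 3, 2, 1]
```

Notice how the complexity ramps naturally: one argument is easy, two is a small step, and negative steps force the candidate to reconsider the loop condition.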
The current job market is so messed up that I honestly can't see myself getting a job until we hit a wall and people start using their brains again.
I have 26 years of solid experience, been writing code since I was 8.
There should be a ton of companies out there just dying to hire someone with that kind of experience.
But I'm not perfect, no one is; and faking doesn't work very well for me.
I mostly skipped the technical questions in the last few interviews I have conducted. I have a conversation, ask them about their career, about job changes, about hobbies, what they do after work. If you know the subject, skilled people talk a certain way, whether it is IT, construction, sailing.
I do rely on HR having, hopefully, done their job and validated the work history.
I do have one technical question that started out as fun and quirky but has actually shown more value than expected. I call it the desert island cli.
What are your 5 linux cli desert island commands?
Having a hardware background, today, mine are: vi, lsof, netcat, glances, and I am blanking on a fifth. We have been doing a lot of terraform lately
I have had several interesting responses:
Manager-level candidate with 15+ years of hands-on experience: he thought it was a dumb question because it would never happen. He became the team's manager a few months after hiring. He was a great manager and we are friends.
Manager-level candidate to replace the "dumb question" manager: his picks were all Mac terminal eye candy. He did not get the job.
Senior-level SRE hire with a software background: he only needed two, emacs and a compiler; he could write anything else he needed.
The problem isn't AI, the problem is companies don't know how to properly select between candidates, and they don't apply even the basics of Psychometrics. Do they do item analysis of their custom coding tests? Do they analyse the new hires' performances and relate them to their interview scores? I seriously doubt it.
Also, the best (albeit the most expensive) selection process is simply letting the new person do the actual work for a few weeks.
My last interview, for the job I'm currently in, asked for a take-home assignment where I was allowed to use any tool I'd use regularly, including AI. A similar process followed for a live coding interview iterating on the take-home. I personally used it to speed up writing initial boilerplate and test suites.
I fail to see why this wouldn't be the obvious choice. Do we disallow linters or static analysis in interviews? This is a tool, and checking for skill and good practices using it makes perfect sense.
There’s no other industry* that interviews experienced people the way we do. So maybe just do what everyone else does.
Everyone is so terrified of hiring someone that can’t code, but the most likely bad hires and the most damaging bad hires are bad because of things that have nothing to do with raw coding ability.
*Except the performing arts. The way we interview is pretty close to the way musicians are interviewed, but that’s also really similar to their actual job.
I've been arguing that "AI" has very little impact on meaningful technical interviews, that is ones that don't test for memorization of programming trivia: https://blog.sulami.xyz/posts/llm-interviews/
Prediction: faangs will come up with something clever or random or just fly everyone onsite, they are so rich and popular, they can filter by any arbitrary criteria.
Second-rate companies will keep some superficial coding, but will start to emphasize more of the verbal parts, like system design and retrospectives. Which sucks, because those are totally subjective and mostly filter for whoever can BS better on the spot and/or cater to the interviewer's mood and biases better.
My favorite still: in-person pair programming for a realistic problem (could be made-up or shortened, but similar to the real ones on the job). Use whatever tools you want, but get the correct requirements and then explain what you just did, and why.
A shorter/easier task is to code review/critique a chunk of code, could even just print it out if in person.
This whole conversation is depressing me. When I left work a couple years ago due to health reasons, AI was just beginning to become a problem. Now, thanks to a clinical study, I may be able to return to work, and it sounds like the industry has changed drastically.
Not looking forward to it.
How about paid internships as a way to filter candidates? As in, hire a candidate for a small period of time, like 2 weeks or something, and have them work on a real task with a full-time employee and use their performance on that to decide whether or not to hire.
I realize it's not easy for smaller companies to do, but I think it's the single best way to see if someone's fit for a job
Our tech screen is having the candidate walk me through a small project that I created to highlight our stack. They watch my screen and solve a problem to get the app running then they walk me through a registration flow from the client to the server and then returning to the client. There are no gotchas but there are opportunities to improve on the code (left unstated... some candidates will see something that is suboptimal and ask why or suggest some changes).
We get to touch on client and browser issues, graphQL, Postgres, Node, Typescript (and even the various libraries used). It includes basic CRUD functionality, a third party API integration, basic security concerns and more. It's just meant to gauge a minimal level of fluency for people that will be in hands on keyboard roles (juniors up to leads, basically). I don't think anyone has found a way to use AI to help them (yet) but if this is too much for them they will quickly fall flat in the day to day job.
Where we HAVE encountered AI is in the question/answer portion of the process. So far many of those have been painfully obvious but I'm sure others were cagier about it. The one incident that we have had that kind of shook us was when someone used a stand-in to do the screen (he was fantastic, lol) and then when we hired him it took us about a week to realize that this was a team of people using an AI avatar that looked very much like the person we interviewed. They claimed to be in California but were actually in India and were streaming video from a Windows machine to the Mac we had supplied for Teams meetings. In one meeting (as red flags were accumulating) their Windows machine crashed and the outline of the person in Teams was replaced by the old school blue screen of death.
I'm someone who hated leetcode style interviews for the longest time, but I'm starting to come around on them. I get that these style of questions are easy to game, but I still think they have _some_ value. The point of these questions was supposed to be testing your ability to problem solve and come up with a good solution given the tools you knew. That being said, I don't think every company should be using this type of question for their interviews. I think leetcode style questions should be reserved for companies that are pushing the boundary of the industry, since they're exploring uncharted territory and need people who can come up with unique solutions to problems no one really knows. Most companies would be fine with some kind of pairing problem, since most people are solving engineering problems rather than computer science problems. But none of this matters since, even if we went that direction as an industry, we all know the business people would fuck it up somehow anyways.
I had no idea people took hackerrank as a serious signal rather than as a tool for recent graduates to track interview prep progress. Surely it has all the same issues AI does: you have no way of verifying that the person who takes the interview actually is responsible for that signal.
I don't see AI as a serious threat to the interview process unless your interview process looks a lot like hackerrank.
I feel like we, SWEs, have been over-engineering our interview process. Maybe it's time to simplify it, for example, just ask questions based on the candidate's resume instead of coming up with random challenges. I feel like all the new proposals seem overly complicated, and nobody, interviewer or interviewee, is happy with any of them.
Licensing. We do the Leetcode interview in a controlled testing center. When you apply for a position, I look up your license number, then I know you can leetcode without wasting any of my developer resources on determining that.
thank fuck. they are terrible. being interviewed by CTOs just out of university, with no experience, for a senior-in-everything role. they ask you to do some lame assignment, a pet problem, without once looking at 20 years of GitHub repos and open source contributions.
When it comes to interviews I generally stick to asking fairly easy questions but leaving out key details; I care a lot more about candidates asking follow-ups and talking through the code they are writing than about what code they actually produce. If a candidate can ask questions when they don't understand something and talk through their thought process, then they are probably going to be a good person to work with. I find high-level design questions are often pretty valuable as well; I usually don't require code for those, I just ask candidates to talk through how they would design an application.
In my uni days, I respected professors who designed exams in a way where students could utilize whatever they had to complete the assignment: the internet, their notes, calculators, etc.
I think the same applies to a good tech interview. Companies should adapt their hiring process to befriend AI, not fight it.
nah, ai killed stupid tech interviews. you can easily get an idea of someone's competence by literally just talking to them instead of making them do silly homework exercises and testing their rote memorisation abilities.
So, when AI can pass the tech interview seamlessly, I guess we can just hire it?
Maybe the future will be human shills pretending to be job candidates for shady AI “employment agencies” that are actually just (literally) skinning gpt6 APIs that sockpuppet minimum-wage developing-nation “hosts”?
It’s simple: don’t have a tech interview that doesn’t relate to the job.
Show code, ask questions about it that require opinion.
I don't think it did, if anyone cares. The way I've been advocating to my colleagues who are concerned about "cheating" is that there's probably a problem with the interview process. I prefer to focus on the think, rather than the solve.
Collaborate, as opposed to just do.
Things that really tell me if I can work with that person and if together, we can make good things.
Stop remote tech interviews
Unless the job you're interviewing for is remote-only, this makes perfect sense. If you expect your candidates to be able to work in your office, they should be interviewed there.
I think that a mythology about where the difficulty in working with computers lies has made the relationship between businesses and the people they hire to do this stuff miserable for quite some time
"Coding", as in writing stuff in programming languages with correct syntax that does the thing asked for in isolation, has always been a very dumb skill to test for. Even before we had stackoverflow syntactic issues were something you could get through by consulting a reference book or doing some trial and error with a repl or a compiler. That this is faster now with internet search and LLMs is good for everyone involved, but the fact that it's not what matters remains
The important part of every job that gets a computer to do a thing is a combination of two capabilities: Problem-solving, that is, understanding the intended outcome and having intuition about how to get there through whatever tools are available, and frustration tolerance: The ability to keep trying new* stuff until you get there
Businesses can then optimize for things like efficiency or working well with others once those constraints are met, but without those capabilities you simply can't do the job, so they're paramount. The problem with most dinky little coding interviews wasn't that you could "cheat", it's that they basically never tested for those constraints by design, though some clever hiring people manage to tweak them to do so on an ad hoc basis sometimes.
* important because a common frustration failure mode is repetitive behavior. Try something. Don't understand why it doesn't work. Get more frustrated. Try the same thing again. Repeat
Funny enough, the songs from the website Coding For Nothing about grinding LeetCode and endless take-home projects seem very relevant, and everything nowadays feels like a meme.
Tech interviewing has become a weird survival game, and now AI is flipping the rules again. If you need a laugh: https://codingfornothing.com
One option is to make the interviews harder and let candidates use AI, to see how well they work with it and whether they can actually build a working product. They will be using AI on the job anyway, so let them use it instead of asking stupid algorithm questions about sorting an array.
So there’s AI that’s really good at doing the skills we’re hiring for. We want you to not use AI so we can hire you for a job that we’re saying we’re going to replace with AI. Sounds like a great plan.
Maybe we don't need employers. Maybe we need a bunch of 1-person companies. I don't think AI is yet the force multiplier that makes that feasible for the masses, but who knows what things will look like in a few years.
I think code design can often cover just as much as actual code anyway. Just describe to me how you'd solve it, the interfaces you'd use, and how you'd show me you solved it.
As an interviewee it's insane to me how many jobs I have not gotten because of some arbitrary coding problem. I can confidently say after having worked in this field for over a decade and at a FAANG that I am a very capable programmer. I am considered one of the best on every team I've been on. So they are definitely selecting the wrong people IMO.
> What are our options?
* Take a candidate's track record into account. Talk with them about it.
* Show that you're experienced yourself, by being able to tell something about what someone would be like to work with, by talking with them.
* Get a reputation for your company not tolerating dishonesty. If someone cheats in an interview and gets caught, they're banned there, all the interviewers will know, and the cheater might also start to get a reputation beyond that company. (Bonus: Company reputation for valuing honesty is attractive to people who don't want dishonest coworkers.)
* Treat people like a colleague, trying to assess whether it's a good match. You're not going to be perfectly aligned (e.g., the candidate or the company/role might be a bit out of the other's league right now), but to some degree you both want it to be a good match for both parties. Work as far as you can with that.
(Don't do this: Leetcode hazing, to establish the dynamic of them being there to dance for your approval, so hopefully they'll be negged, and will seek your approval, won't think critically about how competent and viable your self/team/company are, and will also be less likely to get uppity when you make a lowball offer. Which incidentally places the burden of rehearsing for Leetcode ritual performances upon the entire field, at huge cost.)
We did an experiment at interviewing.io a few months ago where we asked interviewees to try to cheat with AI, unbeknownst to their interviewers.
In parallel, we asked interviewers to use one of 3 question types: verbatim LeetCode questions, slightly modified LeetCode questions, and completely custom questions.
The full writeup is here: https://interviewing.io/blog/how-hard-is-it-to-cheat-with-ch...
TL;DR:
- Interviewers couldn't tell when candidates were cheating at all
- Both verbatim and slightly modified LeetCode questions were really easy to game with AI
- Custom questions were not gamable, on the other hand[1]
So, at least for now, my advice is that companies put more effort into coming up with questions that are unique to them. It's better for candidates because they get better signal about the work, it reduces the value asymmetry (companies have to put effort into their process instead of just grabbing questions from LeetCode etc), and it's better for employers (higher signal from the interview).
[1] This may change with the advent of better models
The death of shitty interviews has been greatly exaggerated.
AI might make e.g. your leetcode interview less predictive than it previously would have been. But was it predictive in the first place? I don't think most interviews are written by people thinking in those terms at all. If your method of interviewing never depended on data suggesting it actually, you know, worked in the first place, why would it matter if it starts working even worse?
Insofar as it makes the shittiness of those interviews more visible, the effect of AI is a good thing. An interview focused on recall of some specific algorithm was never predictive; it's just that now it fails in a way that Generic Business Idiots can understand.
We frequently interview people who both (a) claim to have been in senior IC roles (not architect positions, roles where they are theoretically coding a lot) for many, many years and (b) cannot code their way out of a paper bag when presented with a problem that requires even a modicum of original reasoning. Some of that might be interview nerves, of course, but a lot of these people are not at all unconfident. They just...suck. And I wonder if what we're seeing is the downstream effects of Generic Business Idiots hiring primarily people who memorize stuff than people who build stuff.
The inconvenient truth is that everything circles back to in-person interviews.
The article addresses this:
>A lot of companies are doing RTO, but even companies that are 100% in-office still interview candidates from other cities. Spending money to fly every candidate out without an aggressive pre-screen is too wasteful.
No, accidently hiring someone who AI'd their way through the interview costs orders of magnitude more to undo. It's absolutely worth paying for a round trip flight and a couple days of accommodations.
1point3acres is massacring tech interviews right now. Having to pay $80/month to some China based website where NDA-protected interview questions are posted regularly, then being asked the same questions in the interview, seems insane.
It also feels like interviewers know this and assume you've studied the questions; they seem incapable of giving hints, etc., if you don't have the questions memorized.
AI is the least of it.
Very funny :) I too failed an interview at google, also related to binary search on a whiteboard. I never write with pens; I'm on keyboards the whole time, and my handwriting is terrible.
I've built a search engine for two countries and then I was failed by a guy that wears cowboy hats to work at google in Ireland. Not a lot of cows there I'm guessing. (No offence to any real cowboys that work at google of course).
I did like the free flight to Ireland though, and the nice lunch. Though I was disappointed I lost the "Do no evil" company booklet.
The best interview process I've ever had was going to work with former coworkers, aka no real process. A couple of quick calls with new people who deferred strongly to the person who knew me, my work, and my values. Nothing else has the signal value.
Of course the problem is this can't scale or be outsourced to HR, but is this a bug or a feature?
The best interview processes are chill, laid back, open ended.
That's the only way you're going to get relevant information.
I've been verified to the moon and back by Apple and others for roles that could never have worked.
The problem is that when it comes to the hiring process, everyone is suddenly an expert; no matter how dysfunctional, inhumane and destructive their ideas are.
Anyone who suggests a pair programming solution is right, but answering the wrong question. Unless/until we return to a covid-like market, the process will never be optimized for the candidate, and this is just too expensive an approach for employers. In this market I think the answer is to hire less.
one hiring manager told me they don't do code challenges. they said, "why would someone take a job they couldn't do?"
isn't it that simple?
I just ask the candidate to share a text editor and I write my questions down in it. It's critical anyway because, more often than not, it's not entirely clear what exactly I asked in a tech question (a linux command, for example).
This blocks their screen too.
And yes, we do know very soon if you look somewhere else, take your time, or rephrase the question to get more time.
If you're able to fake it, at that point you should just get the job anyway :P
Why don't we simply ask the AI how to conduct a tech interview nowadays?
Interestingly I find AI is actually better at that kind of CS whiteboard question (implementing a binary search tree) than that "connecting middlewares to API" type task. Or at least, it's more straightforward to apply the AI, when you want a set of functions with clearly defined interfaces written - rather than making a bunch of changes across existing files.
The best interview process I've ever been a part of involved pair programming with the person for a couple of hours, after a tech screening phone call with a member of the team. You never failed to know within a few minutes whether the person could do the job and be a good coworker. This process worked so well that it created the best, most productive team I've worked on in 20+ years in the industry, despite that company's other dysfunctions.
The problem with it is the same curse that has rotted so much of software culture—the need for a scalable process with high throughput. "We need to run through hundreds of candidates per position, not a half dozen, are you crazy? It doesn't matter if the net result is better, it's the metrics along the way that matter!"