I went to look at some of the author's other posts and found this:
https://www.antifound.com/posts/advent-of-code-2022/
So much of our industry has spent the last two decades honing itself into a temple built around the idea of "leet code", from the interview process to things like Advent of Code.
Solving brain teasers and knowing your algorithms cold in an interview was always a terrible idea. The sort of engineers it invited to the table, and the kinds of thinking it propagated, were bad for our industry as a whole.
LLMs make this sort of knowledge moot.
The complaints about LLMs that lack any information about the domains being worked in, the means of integration (deep in your IDE vs. cut and paste into vim), and what you're asking it to do (in a very literal sense) are missing the critical factors that remain unaired in these sorts of laments.
It's just hubris. The question not being asked is "Why are you getting better results than I am? Am I doing something wrong?"
My career predates the leetcode phenomenon, and I always found it mystifying. My hot take is that it’s what happens when you’re hiring what are essentially human compilers: they can spit out boilerplate solutions at high speed, and that’s what leetcode is testing for.
For someone like that, LLMs are much closer to literally replacing what they do, which seems to explain a lot of the complaints. They’re also not used to working at a higher level, so effective LLM use doesn’t come naturally to them.
> The complaints about LLMs that lack any information about the domains being worked in, the means of integration (deep in your IDE vs. cut and paste into vim), and what you're asking it to do (in a very literal sense) are missing the critical factors that remain unaired in these sorts of laments.
I'm not sure whether this is a direct response to the article or a general point. The article includes an appendix about my use of LLMs and the domains I have used them in.