What I wonder is: are current LLMs even good for the type of work he does: novel, low-level, extremely performant code?
As a professional C programmer, the answer seems to be no; they are not good enough.
This is a funny one. On the one hand the answer is obviously no; it's very fiddly stuff that requires a lot of umming and ahhing. But then, weirdly, they can be absurdly good in these kinds of highly technical domains, precisely because the problems are often simple enough to pose to the LLM that any help it gives is immediately applicable, whereas in a comparatively boring/trivial enterprise application there is a vast amount of external context to grapple with.
From my experience, it's just good enough to give you an overview of a codebase you don't know and enough implementation suggestions to work from there.
If Fabrice explained what he wanted, I expect the LLM would respond in kind.
No
I doubt it, although LLMs do seem to handle low-level work (assembly-level instructions) well.
I'm writing C for microcontrollers and ChatGPT is very good at it. I don't let it write any code (because that's the fun part, why would I), but I discuss with it a lot, asking questions and asking it to review my code, and it does well. I also love to use it to explain assembly.