> AI is threatening to replace us (or, at least, the most joyful aspects of our craft)
I don’t like this either, but every time I use LLMs it feels like we’re talking about completely different things. It moves waaay too fast and makes bad decisions at every turn; if I accepted them all, it would be complexity deadlock within a week, tops. Sure, it can poop out boilerplate, but then you’re generally holding it wrong anyway (or there’s an opportunity to automate things). And even if you don’t have the time to automate it, fine, but are you actually enjoying the act of shitting out your own boilerplate?
Out of the things I consider fun, the LLM is at best a good rubber duck. It needs constant hand-holding, sometimes polluting the context window (and physical space) with a barrage of poorly written code. Code is bad; we’re trying to minimize it. No? At least that’s how I think about it: what’s the minimum* amount of code that can solve this problem?
*: minimum in a brain-complexity sense, not char count. They correlate strongly, though.
I find using the LLM as a rubber duck helpful, even if (no, especially if) I don’t end up pressing send. Just writing out the problem leads me to the answer.