The question is how many decades each user of your software would have to use it in order to offset, through the optimisation it provides, the energy you burned through with LLMs.
In the past week alone I had 4 different tasks that would have kept my M4 running at full tilt for a week each, and with just a few prompts I optimized them down to about 20 minutes. So it pays for itself in more like an hour, not decades. An average Claude invocation is around 0.3 Wh, while an M4 draws 40-60 W, so 24 x 7 x 40 >> 0.3 x 10.
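For a rough sanity check, here's that back-of-the-envelope arithmetic spelled out as a tiny Python sketch. It uses the figures from the comment above; the count of 10 prompts and the low-end 40 W draw are assumptions for illustration, not measurements.

    # Back-of-the-envelope energy comparison (figures from the comment above).
    m4_power_w = 40                  # assumed low end of the M4's 40-60 W draw
    week_hours = 24 * 7              # the task would have run at full tilt for a week
    m4_energy_wh = m4_power_w * week_hours               # 6720 Wh

    claude_wh_per_invocation = 0.3   # stated average energy per Claude invocation
    prompts = 10                     # assumed number of prompts used
    llm_energy_wh = claude_wh_per_invocation * prompts   # 3 Wh

    print(f"M4 for a week: {m4_energy_wh} Wh vs. LLM prompts: {llm_energy_wh} Wh")
    print(f"Ratio: {m4_energy_wh / llm_energy_wh:.0f}x")

Even with generous assumptions about per-prompt cost, the week of local compute dominates by three orders of magnitude.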
Especially considering that suddenly everyone and their mother creates their own software with LLMs instead of using the almost-perfect-but-slightly-non-ideal software others have written before.
I’m not really worried about energy consumption. We have more energy falling out of the sky than we could ever need. I’m much more interested in saving human time so we can focus on bigger problems, like using that free energy instead of killing ourselves extracting and burning limited resources.
When global supply chains are disrupted again, energy and/or compute costs will skyrocket, meaning your org may be forced to defer hardware upgrades and LLMs may no longer be cost-effective (as over-leveraged AI companies attempt to recover their investment with less hardware than they'd planned). Once this happens, it may be too late to draw on LLMs to quickly refactor your code.
If your business requirements are stable and you have a good test suite, you're living in a golden age for leveraging your current access to LLMs to reduce your future operational costs.