It is completely relevant if you want reliable software that you use daily to continue running without a massive rewrite.
Before suggesting LLMs to completely rewrite this sort of software, consider that there is a reason compilers must be certified to operate in safety-critical environments. Not everything needs an LLM as the solution to a problem.
I would go as far as to say that using an LLM in this context is the wrong solution and is irrelevant to critical systems. Perhaps some here see everything as tokens and feel compelled to solve every problem with LLMs.
Rewriting a toy web app from JavaScript to TypeScript with an LLM is great, but that isn't good enough for safety-critical systems.
I agree with you. The question is: how the hell is this never discussed when assessing the economic potential of AI-driven disruption? I ask because I have the impression that all the really relevant industries are resistant to the current narrative. That said, we had Claude helping bomb a school full of kids; you would guess the military would know better, but no :/
Safety-critical software is mostly a compliance dance that incidentally produces artifacts with lower defect rates than usual. LLMs can help with safety-critical code as long as a human signs their name and takes responsibility for its behavior.