There’s a need for something to understand those things, but from a user’s perspective it doesn’t matter whether that’s a human or an LLM.
Besides, loads of human-written software is maintained by people who don’t understand what it’s doing, how it interacts with other systems, or where bugs and edge cases will show up. In fact I’d say most software is like this.
LLMs probably introduce fewer security holes than humans do at this point.