LLM technology will never achieve 100% accuracy in its output; there is an inherent non-determinism in how it generates text. Tasks that demand 100% accuracy therefore cannot be handled by an LLM alone. If an LLM is used to replace an HR department, it will inevitably get something wrong, and a human will need to be in the loop to correct it.
The same goes for chess: there will always be some chance that the model makes an illegal move. The same goes for code: there will always be some chance that it produces the wrong program.
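This is why LLM output usually gets wrapped in a deterministic check with a human (or other fallback) behind it. A minimal sketch of that pattern, using chess moves as the example; `llm_suggest_move` and `ask_human` are hypothetical stand-ins for a real model call and a real review step:

```python
import random

def llm_suggest_move(legal_moves):
    # Stand-in for a real LLM call: usually returns a legal move,
    # but occasionally hallucinates an illegal one.
    if random.random() < 0.1:
        return "Qz9"  # not a real square
    return random.choice(legal_moves)

def ask_human(legal_moves):
    # Placeholder for escalation to a human reviewer;
    # here it just picks the first legal move.
    return legal_moves[0]

def next_move(legal_moves):
    """Accept the LLM's move only if a deterministic checker
    validates it; otherwise escalate to a human."""
    move = llm_suggest_move(legal_moves)
    if move in legal_moves:
        return move
    return ask_human(legal_moves)

legal = ["e4", "d4", "Nf3"]
assert next_move(legal) in legal
```

The LLM inside the wrapper can still be wrong with some probability; the point is that the system around it, not the model itself, guarantees a valid move.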
Maybe a new AI technology will eventually be developed without this innate non-determinism, but we don't have one today.