When AI writes nonsensical code, it's a problem, but not a huge one: the code can be reviewed, tested, and fixed before it does harm. But when ChatGPT hallucinates while giving you legal or medical advice, the consequences are tangible and severe.
Unless there's a huge reduction in hallucinations, I absolutely don't see LLMs replacing doctors or lawyers.