The house is poorly put together because the carpenter used a cheap nail gun and a crappy saw.
LLMs are confidently wrong and make bad engineers think they are good ones. See: https://en.wikipedia.org/wiki/Dunning–Kruger_effect
If you're a skilled dev working in a "common" domain, an LLM can be an amazing tool once you integrate it into your workflow and play "code tennis" with it. It changes the calculus on one-offs, minor tools and utils, and small automations that in the past you could never justify writing.
I'm not a lawyer or a doctor, and I would never take legal or medical advice from an LLM. I'm happy to work with the tool on code because I know that domain: I can work with it and take over when it goes off the rails.
It is hard to test LLM legal or medical advice without risk of harm, but it is often exceedingly easy to test LLM-generated code. The most aggravating thing to me is that people just don't. The best thing we can do is encourage everyone who uses and trusts LLMs to test and verify more often.
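To make that concrete: for a small util, "verify" can be as cheap as a few asserts. A sketch, using a hypothetical LLM-generated helper (the function and its name are mine, purely illustrative):

```python
# Suppose an LLM spat out this helper for one of those "minor utils":
def chunk(items, size):
    """Split a list into consecutive chunks of at most `size` items."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]

# Verifying the happy path and the edge cases takes less time
# than reading the diff -- and catches off-by-ones immediately:
assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
assert chunk([], 3) == []          # empty input
assert chunk([1, 2], 5) == [[1, 2]]  # size larger than the list
```

Three asserts won't prove the code correct, but they're the floor of "test and verify" that people skip.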