I strained my groin/abs a few weeks ago and asked ChatGPT to adjust my training plan to work around the problem. One of its recommendations was planks, which is exactly the exercise that injured me.
My cleaning lady's daughter had trouble with her ear. ChatGPT suggested injecting some oil into it. She did, and it caused such a serious problem that she ended up in the hospital.
I'm sure ChatGPT can be great, but take it with a huge grain of salt.
This is one of the main dividing lines in LLM usage and its dangers: not blindly believing what it tells you, and finding hard sources before acting on its advice.
For some people this is so obvious they wouldn't even think to mention it, while others have seen only the hype and none of the horror stories.