I had a weird LLM use instance happen at work this week. We were in a big important protocol review meeting with 35 remote people and someone asked how long it takes for IUDs to take effect in patients. I put it in ChatGPT for my own reference and read the answer to myself but didn't say anything (I'm ops, I just row the boat and let the docs steer the ship). Anyway, this bigwig Oxford/Johns Hopkins cardiologist who we pay $600k a year pipes up in the meeting and her answer is VERBATIM the ChatGPT language, word for word. All she did was ask it the question and repeat what it said! Anyway, it kinda made me sad that all this big fancy doctor is doing is spitting out lazy default ChatGPT answers to guide our research :( Also everyone else in the meeting was so impressed with her, "wow Dr. so-and-so thank you so much for this helpful update!" etc. :-/
>her answer is VERBATIM reading off the ChatGPT language word for word
How could her answer be verbatim the same as the response you got? Even if you both typed the exact same prompt, you wouldn't get the exact same answer.[0, 1]
[0] https://kagi.com/assistant/8f4cb048-3688-40f0-88b3-931286f8a...
[1] https://kagi.com/assistant/4e16664b-43d6-4b84-a256-c038b1534...
The one thing a cardiologist should be able to do better than a random person is verify the plausibility of a ChatGPT answer on reproductive medicine. So I guess/hope you're paying for that verification, not just the answer itself.
Or both the doctor and ChatGPT were quoting verbatim from a reputable source?
The LLM may well have pulled the answer from a medical reference similar to the one used by the doctor. I have no idea why you think an expert in the field would use ChatGPT for a simple question; that would be negligence.