Your analogy doesn't work.
This patient saw 6 doctors over 2 years and they all misdiagnosed them. They were taking incredibly strong medication for the wrong diagnosis.
GPT-3 gave 10 diagnoses in 30 seconds, and a doctor looked up the ones they didn't recognize. They saw one that fit, ran the confirmation test, and the patient got the surgery they needed.
Cheaper and better.
Any chance you can link to the reference so I can read further about this?
The doctor did the diagnosing then, using the language model as a tool. Impressive, but I think it's misleading to imply GPT did the diagnosing.