I've always believed that the AI/LLM/ML hysteria is misapplied to software engineering... it just happens to be an adjacent field, not one that can apply the technology all that well.
Medicine and law, OTOH, suffer heavily from a fractal volume of data and a dearth of experts who can deal with the tedium of applying an expert eye to that much data. Imagine we start capturing ultrasounds and chest X-rays en masse, or giving legal advice to those who need help. LLMs/ML are more likely to get that right than they are to get writing computer code right.
(My spouse was an ultrasound tech for many years.)
The problem with an example like ultrasound is that it's not a passive modality - you don't just take a sweep and then analyze it later. The tech is taking shots and adjusting as they go along to see things better in real time. There's all sorts of stuff potentially in the way - often bowel and bones - and you have to work around all of that to see what you need to.
A lot of the job is actively analyzing what you're seeing while you're scanning and then going for better shots of the things you see, and the experience and expertise needed to get those shots are the same skills required to analyze the images and know what shots to get. It's not just a matter of waving a wand around and then having the rad look at it later.
When AI writes nonsensical code, it's a problem, but not a huge one. But when ChatGPT hallucinates while giving you legal/medical advice, there are tangible, severe consequences.
Unless there's going to be a huge reduction in hallucinations, I absolutely don't see LLMs replacing doctors or lawyers.
100% agree ‘chat bots’ will not be a revolutionary technology, but other uses of the underlying technology will be: general robotics, pharmaceuticals, new materials… and eventually first-line medicine and law, sure, though I sure don't want doctors to vibe-diagnose me or lawmakers to vibe-legislate.
[Insert "let me laugh even harder" meme here]
That would be actual malpractice in either case.
LLMs have a history of fabricating laws and precedents when acting as lawyers. Any advice from an LLM would likely be worse than just assuming something sensible, since a sensible assumption is more likely to reflect what the law actually is than whatever the LLM hallucinates it to be. Medicine is in many ways similar.
As for your suggestion to capture and analyze ultrasounds and X-rays en masse, that would be malpractice even if it were performed by an actual doctor instead of an AI. We don't know the base rate of many benign conditions, except that it is always higher than we expect. The additional images are highly likely to show conditions that could be either benign or dangerous, and additional procedures (such as biopsies) would be needed to determine which it is. This would create additional anxiety in patients from the possible diagnosis, and further pain and possible complications from the additional procedures.
While you could argue for taking these images and not acting on them, you would either tell the patients the results and leave them worried about what the discovered masses are (so they will likely have the procedures anyway) or not tell them (which has its own ethical implications). Good luck getting that past the institutional review board.
I don’t know what “Fractal volume of data” means exactly, but I think you’re underestimating how much more complicated biology is than software.
Well, that is not how it is applied in the article at all.
Somehow, LLMs always seem to be "more likely to get this right" in fields other than one's own (software, I suppose, this being HN). The term "Andy Grove Fallacy", coined by Derek Lowe (whose articles are frequently posted here, and who references the term in a recent piece [1]), comes to mind...
[1] https://www.science.org/content/blog-post/end-disease