> Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment.
I suspect that will be legally tested sooner rather than later.
And just like Grok's CSAM problem, it will be exempt from consequences.
It also doesn't make any sense. It's like self-driving cars that require you to pay attention at all times anyway.
How is this NOT a class action suit in the making?
As long as the liability precedents set by prior case law and current regulations hold, there should be no problem. OpenAI and the hordes of lawyers working for and with them will have ensured that every appropriate and legally required step has been taken, and at least for now, these are software tools used by individuals. AI is not an agent of itself or of the platform hosting it; the user's relative level of awareness of that fact shouldn't be legally relevant as long as OpenAI doesn't make any claims to the contrary.
You also have to imagine that they've got their zero-guardrails, superpowered, internal-only next-generation bot available to them, which the lawyer horde can use to ensure their asses are thoroughly covered. (It'd be staggeringly stupid not to use their own AI for things like this.)
The institutions that have artificially capped the number of doctors, strangled and manipulated healthcare for personal gain, and allowed the insurance and health industries to become cancerous should be terrified of what's coming. Tools like this will be able to give people a deep, nuanced understanding of their own healthcare and act as a force multiplier for doctors and nurses, of whom there are far too few.
It'll also be WebMD on steroids, and every third person will likely be convinced they have stereochromatic belly button cancer after each chat, but I think we'll be better off anyway.