Hacker News

mhovd · today at 9:47 AM

The risk-to-benefit ratio of introducing a language model to interpret such clear signals is nowhere near justified.

Monitoring and analytics are important, but they are a solved problem. A language model can only hallucinate about the relationship between meals and glycemic response. At best it does no harm; at worst it directly misinforms.


Replies

pimeys · today at 10:23 AM

Yep. The oref1 algorithm is amazing and proven to make diabetics' quality of life better, AND SAFE. I don't understand why you would need to add AI to that mix.

But I will check this algorithm out. Maybe it has some interesting bits.

wg0 · today at 11:12 AM

Thanks for calling this out!

We're still debating and trying to understand what impact AI has on software engineering and quality, let alone putting AI into something that's directly linked to a human's well-being.

nonameiguess · today at 12:22 PM

That's just risk/benefit to the user. As the developer, I'd be concerned that publicly distributing and marketing this, even with a GPL "no warranty" license and even free to the user, is illegal.

AnthonBerg · today at 10:42 AM

My experience of using LLMs to pattern-match and cast diagnostic nets is completely the opposite.

Is your perspective based on, say, opinionated principle, or on experience?

The benefits are enormous.

The risks? What risks? No diabetic with baseline adult competence is going to drive their insulin-delivery vehicle off a cliff because some app said so.
