If you'd read the whole thing, you'd have followed a debugging journey that involves bypassing the LLM entirely, which is exactly the kind of content that belongs on HN, rather than a reason to dismiss the article. You might want to give it a read.
It's not about LLMs doing math.
Uhh, that's not what the article is about. The article is about running an ML model on a phone, where the floating-point ops for the tensor multiplications seem to be off.
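For what it's worth, that kind of discrepancy is easy to reproduce without any model at all: in float32, the accumulation order alone changes the result of a dot product. A minimal sketch (plain NumPy, not the article's actual code) of the effect:

```python
import numpy as np

# Same dot product, two accumulation orders: the float32 results
# typically differ in the low-order bits because float addition is
# not associative.
rng = np.random.default_rng(0)
a = rng.standard_normal(10_000).astype(np.float32)
b = rng.standard_normal(10_000).astype(np.float32)

forward = np.float32(0.0)
for x, y in zip(a, b):
    forward += x * y          # left-to-right accumulation

reverse = np.float32(0.0)
for x, y in zip(a[::-1], b[::-1]):
    reverse += x * y          # right-to-left accumulation

print(forward, reverse, forward - reverse)
```

Different hardware (phone GPU/NPU vs desktop CPU) picks different accumulation orders and precisions, so small divergences like this are expected; the article's debugging journey is about pinning down where they become large enough to matter.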