It all went south when we started to call it "learning" instead of "fitting parameters".
It was already called "learning" back when the field was called cybernetics and foundational figures like Shannon worked on this kind of stuff. People tried to decipher learning in the nervous system and implement the extracted principles in machines: Hebbian learning, the Perceptron algorithm, etc. This stuff goes back to the 40s/50s/60s, so things must have gone south pretty early then.
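For reference, the Perceptron algorithm mentioned above really is just parameter fitting: nudge the weights whenever a point is misclassified. A minimal sketch (the AND-gate toy data and hyperparameters are my own illustration, not from this thread):

```python
def perceptron_train(samples, labels, epochs=20, lr=1.0):
    # Classic Rosenblatt perceptron: update weights only on misclassified points.
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # labels are +1 or -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Linearly separable toy data: an AND gate with +1/-1 labels.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [-1, -1, -1, 1]
w, b = perceptron_train(X, Y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1 for x in X]
```

On linearly separable data like this, the update rule is guaranteed to converge, which is about as far from mysterious "learning" as it gets.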
I agree with you so much. I have seen so many people, even on Hacker News, somehow attribute human qualities to LLMs.
This Grammarly thing seems to be a bastardized form of that, not even sparing the dead.
I'd say the AI companies had some incentive to muddy the waters here.
"Fitting" is still too nice a word choice, because it implies that it's easy to identify the best solution.
I suggest "randomly adjusting parameters while trying to make things better", as that accurately reflects the "precision" that goes into stuffing LLMs with more data.
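"Randomly adjusting parameters while trying to make things better" is, more or less literally, random hill climbing. A minimal sketch (the toy squared-error objective, step size, and step count are my own illustrative choices):

```python
import random

def random_hill_climb(loss, dim, steps=2000, scale=0.1, seed=0):
    # Perturb the parameter vector at random; keep the change
    # only if it made the loss smaller. Nothing cleverer than that.
    rng = random.Random(seed)
    params = [0.0] * dim
    best = loss(params)
    for _ in range(steps):
        candidate = [p + rng.gauss(0, scale) for p in params]
        c_loss = loss(candidate)
        if c_loss < best:
            params, best = candidate, c_loss
    return params, best

# Toy objective: pull the parameters toward the target vector (3, -2).
target = [3.0, -2.0]
loss = lambda p: sum((pi - ti) ** 2 for pi, ti in zip(p, target))
params, best = random_hill_climb(loss, dim=2)
```

Actual training uses gradients rather than blind perturbation, but the "try a change, keep it if the number goes down" framing is the honest core of both.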