re #2: Do people actually call it thinking, or is that just clever marketing from the AI companies? Whenever you ask a question it repeatedly prints "...thinking...", and they offer various modes with the word "thinking" in the name.
The AI companies obviously want the masses to assume these are intelligent beings that think like humans, so their output can simply be trusted as truthful.
I have an intelligent IT colleague who doesn't follow AI news at all and has zero knowledge of LLMs, other than that our company recently allowed limited Copilot usage (with guidelines on what data we're allowed to share). A couple of weeks ago I noticed he was asking it various mathematical questions, and I warned him to be wary of the output. He asked why, so I had him ask Copilot/ChatGPT "how many r letters are in the word strawberry".

Copilot initially said 2, then said after thinking about it that it was actually definitely 3, then thought some more and said it couldn't say with reasonable certainty but would assume it must be 2. We repeated the experiment with completely different results, but the answer was still wrong. On the third attempt it got it right, though the "thinking" stages were most definitely bogus. Considering how often this question comes up in online forums, I would have assumed LLMs would finally get it right, but alas, here we are. I really hope the lesson instilled some skepticism about trusting AI output without double-checking it first.
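(For what it's worth, the answer is trivially checkable outside the LLM; a one-liner in standard Python settles it:

    >>> "strawberry".count("r")   # deterministic letter count
    3

which is exactly the kind of task I'd rather not delegate to a probabilistic text generator.)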