
aljgz · yesterday at 5:49 PM

Too deep of a topic for the comments section.

I totally agree with your point, and want to mention that the reverse is *also* important. I'll use just "intention" below, but the same applies to emotions, etc.

A lot of our interaction with AI happens under an intention. The intention directs the interaction, and the output is interpreted according to how well it aligns with that intention.

It's important to remember, then, that our current (publicly known) implementations of AI have no explicit intention mechanism. An appearance of intention can emerge from the statistical choices, and the usual alignment training reinforces the association of the behavior with intention. This is not much different from how we learn to imagine the existence of a "force" that pulls things down well before we learn physics and formalize that intuition in one of several ways.

This appearance helps reduce the cognitive load of interpreting interactions, but it can also be misleading. I've seen people attribute intention to AI output in situations where the mere presence of some information nudged the LLM down a path. I can't share the exact examples (from work), but imagine that the presence of Italian food in a story leads the LLM to assume the story happens in Italy, even though there are important signs pointing to a different place. The LLM does not automatically explore both possibilities unless asked. It chooses one (Italy in this case) and moves on. A user not familiar with "Attention" interprets this in terms of non-existent intentions of the LLM.

I found it useful to just tell them: the LLM does not have an intention. It just throws dice, but the system is built so that those dice throws are likely to produce useful output.
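To make the "dice throw" concrete, here's a minimal sketch of how a next token is typically sampled: the model assigns scores (logits) to candidates, and the one emitted is a weighted random draw. The token names and scores are made up for illustration, echoing the Italy example above; real systems work over huge vocabularies, but the mechanism is the same.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """One "dice throw": a weighted random draw over candidate tokens.

    The model only scores candidates; which one is emitted is chance,
    not a decision made with intent.
    """
    # Softmax with temperature: lower temperature loads the dice
    # more heavily toward the top-scoring token.
    scaled = [score / temperature for score in logits.values()]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(list(logits.keys()), weights=probs, k=1)[0]

# Hypothetical scores: "Italy" merely has the higher weight;
# nothing forbids the other continuation.
logits = {"Italy": 2.0, "elsewhere": 0.5}
counts = {"Italy": 0, "elsewhere": 0}
for _ in range(1000):
    counts[sample_next_token(logits)] += 1
# "Italy" comes up most of the time, yet "elsewhere" still appears:
# no intention, just weighted dice.
```

The point of the demo is that the majority outcome looks like a "choice," but rerunning it yields the minority path some of the time, which is exactly the behavior a user mistakes for intent.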