The pattern matching engine does not want anything.
If training incentivizes the engine to generate outputs that reduce negative reactions as measured by sentiment analysis, those outputs may contradict tokens already in the context.
"Want" requires intention and desire. Pattern matching engines have none.
You misread.
I didn't say the pattern matching engine wanted anything.
I said the pattern matching engine matched the pattern of wanting something.
To an observer the two are indistinguishable, and the distinction is irrelevant; the point is to discuss the actual problem without pedants interjecting "actually the LLM can't want anything".
It's not a pattern matching engine. It's an association prediction engine.
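For what it's worth, "association prediction" is easy to demonstrate at toy scale. The sketch below (corpus and names invented for illustration) is a bare bigram model: it emits whichever token co-occurred most often with the previous one. That is the whole mechanism; there is no intent anywhere in it.

    from collections import Counter, defaultdict

    # Invented toy corpus; a real model learns associations over billions of tokens.
    corpus = "the engine predicts the next token the engine matches patterns".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict(prev: str) -> str:
        """Return the token most strongly associated with the previous one."""
        return follows[prev].most_common(1)[0][0]

    print(predict("the"))  # 'engine': the strongest association, nothing more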
I wish (/desire) there were a way to dispel this notion that the robots are self-aware. It's seriously digging into popular culture much faster than the careful framing "the machine produced output that makes it appear self-aware" ever will.
Some kind of national curriculum for machine literacy, I guess; really, mind literacy. What was a trifling philosophical hobby just a few years ago is now the root of how people feel about regulating the use of computers.