I’ve wondered why GenAI text has so many emojis, for example in README.md bullet points.
I guess their RLHF data had it? On purpose? And the same across all the labs?
Because if they were just learning from web data (from before a few years ago), this style didn't seem to be very prevalent.
It's to appeal to the lowest common denominator.
My guess has been that they were trained on a copious amount of JavaScript projects, which always seem to have emoji up the wazoo everywhere.
Emoji and bullet points are easy to read, so they got rewarded in the RLHF process.
You may hate this style at first glance, but if you read lots of text every day, emoji and bullet points lower the cognitive load.
lots of normal people like emoji. the kind of normal people who have never heard of hacker news
The emojis and similar stylistic tics come from models learning from other models, since that is the easiest way to get RLHF data.
Many of the models were trained on top of ChatGPT or variants (hence the emojis); the attribution later disappeared from official accounts, but it's unprovable.
This process is called distillation.
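Distillation in the classic sense means training a student model to match a teacher's output distribution; what the comments above describe is the cruder sequence-level variant, where the student is simply fine-tuned on text the teacher generated. A minimal sketch of the logit-matching form, assuming a PyTorch setup (all names here are illustrative, not any lab's actual pipeline):

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits: torch.Tensor,
                          teacher_logits: torch.Tensor,
                          temperature: float = 2.0) -> torch.Tensor:
        """KL divergence between softened teacher and student distributions.

        The student learns the teacher's whole output distribution, quirks
        included, which is how stylistic tics like emoji use carry over.
        """
        t = temperature
        teacher_probs = F.softmax(teacher_logits / t, dim=-1)
        student_log_probs = F.log_softmax(student_logits / t, dim=-1)
        # Scaling by t^2 keeps gradient magnitudes comparable across temperatures.
        return F.kl_div(student_log_probs, teacher_probs,
                        reduction="batchmean") * (t * t)

In the sequence-level variant there is no loss function to implement at all: you collect the teacher's completions and run ordinary fine-tuning on them, which is exactly how emoji-heavy phrasing would propagate from one model family to another.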
For example, one day Nano-Banana answered me with a link to a picture supposedly generated on... the FAL platform (a link that did not exist).
https://i.redd.it/7nkucg2qelfe1.png
https://www.reddit.com/r/OpenAI/comments/1e34tkr/why_is_clau...
https://cdn.arstechnica.net/wp-content/uploads/2023/12/GA8PG...
But most of this has been fixed since Gemini 1.5-Pro. Over time it is fading, because the labs now have their own trained output, and all these companies actively replace references to OpenAI; the distilled text gets mixed with other training data, their own data, cleaned up, and distilled again, so the source text disappears.
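If the scrubbing works roughly as described above, the simplest version is just a filter pass over teacher-generated training text, either rewriting mentions of the source model or dropping those samples entirely. A minimal sketch; the patterns and replacement name are illustrative guesses, not any lab's actual process:

    import re

    # Hypothetical patterns for self-identification as the source model.
    SOURCE_PATTERNS = re.compile(r"\b(ChatGPT|OpenAI|GPT-4)\b", re.IGNORECASE)

    def scrub(sample: str, new_name: str = "Assistant") -> str:
        """Rewrite references to the source model before training on the text."""
        return SOURCE_PATTERNS.sub(new_name, sample)

    def keep(sample: str) -> bool:
        """Or, stricter: drop any sample that mentions the source model at all."""
        return SOURCE_PATTERNS.search(sample) is None

Either way, after a generation or two of retraining on the cleaned corpus, nothing in the weights points back to where the style came from.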
We're talking about people who had no remorse about downloading a whole library of pirated books, so their concept of copyright is very loose.