Hacker News

magicalhippo, yesterday at 1:25 AM

In the Physics of Language Models[1] they argue that you must augment your training data, for example by rephrasing sentences, for the model to be able to learn the knowledge. As I understand their argument, language models, unlike us, don't have a built-in way to detect which information is important and which is not. Thus the training data must help by presenting important information in many different ways.

Doesn't seem unreasonable that the same holds in a gaming setting: one should train on many variations of each level. Change the lengths of the halls connecting rooms, change the appearance of each room, change power-up locations, etc., and maybe even remove some passages connecting rooms.
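As a rough sketch of what that kind of level augmentation could look like (the level format, field names, and jitter ranges here are all made up for illustration, not from any real game or the paper):

```python
import random

def augment_level(level, rng, drop_passage_prob=0.2):
    """Return a randomized variant of a hypothetical level description.

    `level` is an illustrative dict with 'halls', 'rooms', 'powerups',
    and 'passages' keys -- a stand-in for whatever format a real level
    generator would use.
    """
    return {
        # Jitter each hall length by up to +/-50%, keeping a minimum of 1.
        "halls": [max(1, int(h * rng.uniform(0.5, 1.5))) for h in level["halls"]],
        # Re-skin each room with a random texture id.
        "rooms": [{**r, "texture": rng.randrange(8)} for r in level["rooms"]],
        # Move each power-up to a randomly chosen room.
        "powerups": {p: rng.randrange(len(level["rooms"]))
                     for p in level["powerups"]},
        # Occasionally drop a passage, but keep at least one so the
        # level stays connected to something.
        "passages": [p for p in level["passages"]
                     if rng.random() > drop_passage_prob]
                    or level["passages"][:1],
    }

base = {
    "halls": [4, 6, 3],
    "rooms": [{"id": 0}, {"id": 1}, {"id": 2}],
    "powerups": ["health", "shield"],
    "passages": [(0, 1), (1, 2), (0, 2)],
}
rng = random.Random(42)
variants = [augment_level(base, rng) for _ in range(100)]
```

Each call yields a structurally similar but superficially different level, which is the same spirit as rephrasing the same fact many ways in text pretraining.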

[1]: https://physics.allen-zhu.com/part-3-knowledge/part-3-1