Hacker News

nine_k · last Saturday at 3:33 PM · 1 reply

Are there any well-known examples of success with it?


Replies

thethimble · last Saturday at 8:25 PM

Vision transformers effectively encode a grid of pixel patches. It's ultimately a matter of ensuring the position encoding incorporates both the X and Y position.
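A minimal sketch of one common way to do this, roughly the fixed 2D sine-cosine embedding used in ViT reference implementations such as MAE's: build a 1D sinusoidal encoding separately for the row (Y) and column (X) indices of each patch, then concatenate them. The function names and grid sizes here are illustrative, not from any particular library.

    import numpy as np

    def sincos_1d(positions, dim):
        # Standard 1D sinusoidal encoding: half the channels get sine,
        # half cosine, over a geometric progression of frequencies.
        omega = 1.0 / (10000 ** (np.arange(dim // 2) / (dim // 2)))
        angles = np.outer(positions, omega)               # (n, dim/2)
        return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

    def sincos_2d(grid_h, grid_w, dim):
        # 2D encoding for a patch grid: half the channels encode the
        # row index (Y), half the column index (X), concatenated per patch.
        ys, xs = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")
        emb_y = sincos_1d(ys.ravel(), dim // 2)           # (H*W, dim/2)
        emb_x = sincos_1d(xs.ravel(), dim // 2)           # (H*W, dim/2)
        return np.concatenate([emb_y, emb_x], axis=1)     # (H*W, dim)

    # e.g. a 14x14 patch grid (224px image, 16px patches), 768-dim embeddings
    pos_emb = sincos_2d(14, 14, 768)
    print(pos_emb.shape)  # (196, 768)

Each patch ends up with an embedding where two patches in the same row share their Y half and two patches in the same column share their X half, so attention can recover 2D layout from a flattened sequence.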

For LLMs we only have one axis of position, and - more importantly - the vast majority of training data is oriented only along that axis.