
yobbo · yesterday at 2:48 PM

LLMs produce human-readable output because they learn from human-readable input. It's a feature: it allows the output to be much less precise than, say, bytecode, which wouldn't help at all.