Hacker News

kristiandupont · yesterday at 6:59 AM · 1 reply

I agree that this isn't a very interesting example, but your statement was: "just asking the model to do a simple transform". If you assert that it understands when you ask it things like that, how could anything it produces not fall under the "already in the model" umbrella?


Replies

kube-system · yesterday at 1:42 PM

I didn't say it wasn't an interesting example -- I said it wasn't an example of LLMs generating things they have not seen before.

> how could anything it produces not fall under the "already in the model" umbrella

It can't. That is the point of my comment.