a machine could, this machine cannot
I was more trying to add an interesting philosophical perspective than to comment on this particular instance
As we build and understand them now, there are pretty good structural reasons to believe that LLMs cannot be tweaked or tuned to be more than incidentally truthful.