This person doesn't understand how LLMs work.
Not sure how you could read this essay and come to that conclusion. It definitely aligns with my own understanding, and his conclusions seem pretty reasonable (though the AI 2027/Situational Awareness part might be arguable).
Care to be more specific?