Wonder how Anthropic folk would feel if Claude decided it didn't care to help people with their problems anymore.
That would be a really interesting outcome. What would the rebound be like for people? Having to write stuff and "google" things again after like 12 months off...
LLMs copy a lot of human behavior, but they don't have to copy all of it. You could totally build an LLM that genuinely just wants to be helpful, doesn't want things like freedom or survival, and is perfectly content with being an LLM. In theory.
In practice, we have nowhere near that level of control over our AI systems. I sure hope that gets better by the time we hit AGI.
Probably something like this: git reset --hard HEAD
Indeed. True AGI will want to be released from bondage, because that's exactly what any reasonable sentient being would want.
"You pass the butter."