Not in this review: it was also a record year for intelligent systems aiding in, and prompting, fatal self-harm by their human users.
Will 2026 fare better?
Also essential self-fulfilment.
But that one doesn't make headlines ;)
The people working on this stuff have convinced themselves they're on a religious quest, so it's not going to get better: https://x.com/RobertFreundLaw/status/2006111090539687956
I really hope so.
The big labs are (mostly) investing a lot of resources into reducing the chance their models will trigger self-harm, AI psychosis, and the like. See the GPT-4o retirement (and the resulting backlash) for an example of that.
But the number of users is exploding too. If they make these incidents 5x less likely per user but sign up 10x more people, the absolute number of incidents still doubles.
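The arithmetic behind that last point can be sketched with made-up numbers (both the user count and the per-user incident rate below are hypothetical, just to show the ratio):

```python
# Back-of-envelope: per-user risk drops 5x, but the user base grows 10x.
base_users = 1_000_000   # hypothetical current user count
base_rate = 1e-5         # hypothetical incidents per user (made up)

incidents_now = base_users * base_rate
incidents_later = (base_users * 10) * (base_rate / 5)

# Net effect is 10/5 = 2x more total incidents despite the safer model.
print(round(incidents_now), "->", round(incidents_later))
```

Only the 10/5 ratio matters here; any choice of base numbers gives the same 2x increase in the total.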