>Do the whole "too dangerous to release" shtick.
One aspect that isn't discussed much in this context is how to think about corporate risk as model capability keeps increasing. A model might not be too dangerous for society, but it could still be too dangerous for Anthropic.
There was a view a few years ago that part of why Google was well behind the upstart AI labs was the risk aversion that comes with being a giant ad business.
Now that the upstarts are trillion-dollar companies, I can see the calculus changing. I think the "too dangerous" stuff is pure hype, but it's interesting to consider how these companies' risk tolerance might have shifted compared to when they were small startups.