Hacker News

NitpickLawyer · yesterday at 7:49 PM

There are some good things here:

First, we currently have four frontier labs, plus a bunch of second-tier ones following. The fact that we don't have just oAI or just Anthropic or just Google is good in a general sense, I'd say. The four labs racing each other and trading SotA status every few weeks is good for the end consumer. They keep each other honest and keep prices down. Imagine if Anthropic could charge $60/MTok, or oAI $120/MTok, for their GPT-4-style models. They can't, in good part because of the competition.

Second, there's a bunch of labs / companies that have released, and continue to release, open models. That's as close to "intelligence on tap" as you can get, and those models are only ~6-12 months behind the SotA ones, depending on your use case. Even though the labs have very different incentives for doing so, a lot of them are still releasing open models, and hopefully that continues to hold. So not all control will be in the hands of big tech, even if the "best" models will still be theirs. At some point "good enough" is fine.

There's also the geopolitics angle. So far we've seen the EU jump the gun on regulation, and we're kinda sorta paying for it: everyone is still confused about what can or cannot be done in the EU. The US seems to be waiting to see what happens, and China will do whatever China does. The worst outcome would be the big players (Anthropic is the main driver) pushing for regulatory capture at some point. That would really suck. Thankfully, for now there's this lingering worry that "if we do it and the others don't, we'll be on the back foot". Hopefully that holds, at least until the "good enough" from above is out :)


Replies

almostdeadguy · yesterday at 9:01 PM

I'm not just concerned about control by one company; I'm concerned about control driven by the profit motive, and probably about the wisdom of using these things for anything except extremely limited use cases (breakthrough scientific research, etc.). Tech people have a bad tendency to view this through the lens of platform-wars-type stakes, when there are much bigger problems with AI. The fact that an alarming number of ex- and current Anthropic people I've met think the world is going to end is something we should take heed of!

The AI labs started down this path using the Manhattan Project as a metaphor, and guess what? It's a good metaphor, and we should embrace most of its wider implications (though I'd love to avoid all the MAD / Cold War bullshit this time).