Hacker News

Tech Titans Amass Multimillion-Dollar War Chests to Fight AI Regulation

175 points by thm · today at 9:21 AM · 170 comments

https://archive.is/j1XTl


Comments

Deegy · today at 2:35 PM

They know that LLMs as a product are racing towards commoditization. Bye bye profit margins. The only way to win is regulation allowing a few approved providers.

neilv · today at 4:47 PM

Only "multi-million"?

Someone once told me about being a new journalist, on the city beat. They said something like: I wasn't surprised to find that bribery was going on; I was just surprised the bribes were so small.

throwaway48476 · today at 2:26 PM

AI regulation should wait until after the crash. That way AI can be regulated for what it does and not the fever dream pushed by marketers.

cmiles8 · today at 12:35 PM

They need to be more worried about creating a viable economic model for the present AI craze. Right now there’s no clear path to making any of the present insanity a profitable endeavor. Yes NVIDIA is killing it, but with money pumped in from highly upside down sources.

Things will regulate themselves pretty quickly when the financial music stops.

jmward01 · today at 5:19 PM

I believe that the right regulation makes a difference, but I honestly don't know what that looks like for AI. LLMs are so easy to build/use, and that trend is accelerating. The idea of regulating AI is quickly becoming like the idea of regulating hammers. They are ubiquitous general-purpose tools, and passing legislation specifically about hammers would be deeply problematic for, hopefully, obvious reasons. Honest question: what is practical AND effective here? Specifically, what problems can clearly be solved, and by what kinds of regulations?

yonatron · today at 4:13 PM

"multi-million dollar warchests" Millions, lol. Wrong era, Dr. Evil!

TheAceOfHearts · today at 10:25 AM

Archive: https://archive.is/j1XTl

I cannot help but feel that discussing this topic under the blanket term "AI Regulation" is a bit deceptive. I've noticed that whenever this topic comes up, almost every major figure remains rather vague on the details. Who are some influential figures actually advancing clearly defined regulations or key ideas for approaching how we should think about AI regulation?

What we should be doing is surfacing well defined points regarding AI regulation and discussing them, instead of fighting proxy wars for opaque groups with infinite money. It feels like we're at the point where nobody is even pretending like people's opinions on this topic are relevant, it's just a matter of pumping enough money and flooding the zone.

Personally, I still remain very uncertain about the topic; I don't have well-defined or clearly actionable ideas. But I'd love to hear what regulations or mental models other HN readers are using to navigate and think about this topic. Sam Altman and Elon Musk have both mentioned vague ideas of how AI is somehow going to magically result in UBI and a magical communist utopia, but nobody has ever pressed them for details. If they really believe this then they could make some more significant legally binding commitments, right? Notice how nobody ever asks: who is going to own the models, robots, and data centers in this UBI paradise? It feels a lot like Underpants Gnomes: (1) Build AGI, (2) ???, (3) Communist Utopia and UBI.

Animats · today at 6:29 PM

Are we seeing lobbying for liability exemptions for AI errors? That's probably the biggest practical concern on the consumer side.

nis0s · today at 11:57 AM

We don’t know what kind of insecure systems we’re dealing with here, and there’s a pervasive problem of incestuous dependencies in a lot of AI tech stacks, which might lead to some instability or security risks. Adversarial attacks against LLMs are just too easy. It makes sense to let states experiment and find out what works and doesn’t, both as a social experiment and technological one.

bawolff · today at 6:14 PM

The headline says that as if a few million is a lot of money to spend on lobbying.

boh · today at 3:05 PM

They're also fighting for regulation to keep the competition at bay.

_spduchamp · today at 7:32 PM

Oh wow! I think I found part of the problem. I replied earlier about algorithmic accountability and the need for Algorithm Impact Assessments, and I got a snarky reply and downvoted like I've never seen before. I guess accountability hits a nerve.

So I'll just say...

Algorithm Impact Assessments

Algorithm Impact Assessments

Algorithm Impact Assessments

glitchc · today at 4:07 PM

Every generation has its own copyright war. First file sharing, then blockchains, now LLMs. As long as digital computers are copy-on-write, this debate shall continue. It will only get solved once we have viable quantum computers.

skywhopper · today at 3:08 PM

I like how building up millions of dollars to bribe elected officials is reported on in such neutral terms.

jeffbee · today at 2:30 PM

Is that a lot?

stego-tech · today at 2:10 PM

This is why I’ve refused to buy into the argument from these ghouls that AI would make the world a better place, and their occasional lip-service of requesting AI regulation for “human safety”: their own actions paint a dystopian world of mass surveillance, even heavier labor exploitation, the return of company scrip and stores, and the wholesale neglect of human well-being, all while blocking the very regulation they claim to want and/or need to succeed safely.

If these people genuinely believed in the good of AI, they wouldn’t be blocking meaningful regulation of it.

https://green.spacedino.net/ai-will-never-create-utopia/

SilverElfin · today at 8:21 PM

Fighting regulation and taxation of the big tech megacorps is why the leaders and big investors / owners of these companies are looking the other way and being friendly with an administration that is increasingly supremacist and now even discussing denaturalization of legal immigrants (taking away citizenship). It’s despicable cowardice.

AmbroseBierce · today at 2:06 PM

Oh, they aren't conspiring against democratically made decisions about AI; instead they are "amassing war chests to fight AI regulation". How submissively worded, but that's expected when they have a grip on all major communication channels.

jmyeet · today at 4:53 PM

As a reminder, Princeton did a study and found that public support for a bill has almost zero impact on whether or not that bill passes [1].

This government is bought and paid for by the ultra-wealthy and large corporations with a system of legalized corruption that began long before the disastrous Citizens United decision [2] but that decision put things into overdrive.

When this bubble bursts as I firmly believe it will, it's going to get much worse because Congress will launch into action... by bailing out those that invested billions into AI without any prospect of ever recouping that money.

[1]: https://act.represent.us/sign/the-problem-tmp

[2]: https://www.brennancenter.org/our-work/research-reports/citi...

dist-epoch · today at 10:09 AM

But are they really the ones in control?

It's not the tech titans; it's Capitalism itself building the war chest to ensure its embodiment and transfer into its next host: machines.

We are just its temporary vehicles.

> “This is because what appears to humanity as the history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy's resources.”


conartist6 · today at 9:56 AM

God forbid we protect people from the theft machine

bilsbie · today at 2:23 PM

The government is far more dangerous than anything that you want it to regulate.

ramblenode · today at 4:27 PM

Here is my (hot take) proposal for regulation:

1) *All major players open source their unobfuscated training data.*

a) The evidence so far shows that every major AI company engaged in intentional and historically unprecedented copyright violation to obtain their training data.

b) LLMs have now poisoned future data for any new players. This is a massive negative externality, and we shouldn't accept this externality as a moat locking out future players from competition.

2) *Levy a 20% royalty on all future genAI revenue to authors and artists who appear in the dataset, and exempt genAI from future copyright violations.*

a) The current copyright model is bad for both authors and AI companies. It's hard for authors to collect from violations, and it's expensive and tedious for AI companies to comply with innumerable individual copyrights. Simplify the regime for everyone, and properly reward the people whose work is the foundation of these models.

b) The specifics can be worked out, but, among other things, the royalty should be proportional to the token count of a work, not just the number of works.
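The token-proportional allocation in (2b) can be sketched in a few lines. This is purely illustrative of the commenter's proposal, not any real scheme: the function name, the per-work token counts, and the revenue figure are all made-up assumptions; only the 20% rate comes from the comment.

```python
# Hypothetical sketch: split a fixed royalty pool (20% of genAI revenue)
# among rights holders in proportion to how many tokens of each work
# appear in the training data. All names and figures are invented.

def royalty_shares(token_counts: dict[str, int],
                   revenue: float,
                   rate: float = 0.20) -> dict[str, float]:
    """Allocate rate * revenue proportionally to per-work token counts."""
    pool = revenue * rate
    total_tokens = sum(token_counts.values())
    if total_tokens == 0:
        return {work: 0.0 for work in token_counts}
    return {work: pool * count / total_tokens
            for work, count in token_counts.items()}

# A 100k-token novel thus earns 100x the share of a 1k-token article,
# rather than the equal per-work split a flat scheme would give.
shares = royalty_shares({"novel": 100_000, "article": 1_000},
                        revenue=1_000_000.0)
```

The point of weighting by tokens rather than by work count is that a flat per-work split would pay a tweet the same as a novel.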

twodave · today at 2:05 PM

What is so novel about LLMs (I assume this is the form of AI being discussed) that they require regulation? It's a dataset, an algorithm, and some UI. Almost all the problems brought on by the scale-up are just supply/demand type things. Every problem people point at with AI is also a problem on some scale with computer software in general, so I'm wary of any regulation (and don't kid yourself thinking it would be for the people) bleeding over.
