Every day that passes I grow fonder of Google's decision to delay or otherwise keep a lot of this under wraps.
The other day I was scrolling through YouTube Shorts and a couple of videos triggered an uncanny valley response in me (I think one was a clip of an unrealistically large snake covering some hut), which was somehow fascinating and strange and captivating. Scrolling down a few more, I again saw something kind of "unbelievable"... A comment or two said it was fake, and upon closer inspection: yeah, there were enough AI'esque artifacts that one could confidently conclude it's fake.
We'd known about AI slop permeating Facebook -- usually a Jesus figure made out of an unlikely set of things (like shrimp!) -- and we'd known that it grips eyeballs. I don't even know which box to categorize this in; in my mind it conjures the image of those people on slot machines, mechanically and soullessly pulling levers because they are addicted. It's just so strange.
I can imagine now some of the conversations that might have happened at Google when they chose to keep a lot of innovations related to genAI under wraps (I'm being charitable here about their motives), and I can't help but agree.
And I can't help but be saddened about OpenAI's decisions to unload a lot of this before recognizing the results of unleashing this to humanity, because I'm almost certain it'll be used more for bad things than for good, and that its application to bad things will secure more eyeballs than its application to good ones.
On the other hand, because tools like this are being made available before their output is perfected, you and many others are being trained in AI discernment; being able to detect fake things will be a helpful skill to have for some time: another form of critical thinking.
It would be FAR worse if a privately held advanced AI's outputs were unleashed without the population being at least somewhat cautious of everything. The real danger imho comes from private silos of advanced general intelligence that aren't shared and are instead used to gain power, control, and money.
It saddens me. Innovations in AI 'art' generation (music, audio, photo) have been a net negative to society and are already actively harming the Internet and our media sphere.
Like I said in another comment, LLMs are cool and useful, but who in the hell asked for AI art? It's good enough to fool people and break the fragile trust relationship we had with online content, but is also extremely shit and carries no meaning or depth whatsoever.
They should have kept this amazing tech under wraps because you have a bad feeling about it? Hate to break it to you, but there have been fake videos on the internet for as long as it has existed. There are more ways to fake videos than GenAI. If you haven't been consuming everything on the internet with a high-alert BS sensor, then that's an issue of its own. You shouldn't trust things on the internet anyway unless there is overwhelming evidence.
Too charitable indeed. Google was simply unprepared and has inferior alternatives.
My prediction is that next year they will catch up a bit and will not be shy about releasing new technology. They will remain behind in LLMs but will at least weave them more deeply into their own existing products, thus creating a narrative of innovation and profit potential. They will publicly acknowledge perceived risks and say they have teams ensuring it will be okay.
I wish Google would allow me to remove the AI stuff from search results.
99% of the time it's either useless or wrong.
This is all inevitable. At worst it's pulling the issues forward by a few months or years, and I don't think anyone will meaningfully address the problem until it's staring us in the face.
I believe the internet needs a distributed trust and reputation layer. I haven't fully thought through all the details, but:
- Some way to subscribe to fact checking providers of your choice.
- Some way to tie individuals' reputation to the things they post.
- Overlay those trust and reputation layers.
I want to see a score for every webpage, and be able to drill into what factored into that score, and any additional context people have provided (e.g. Community Notes).
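To make it a bit more concrete, here's a very rough sketch in Python of what I have in mind; everything here (provider names, weights, the blending formula) is made up purely for illustration, not a real design:

    # Rough sketch: a page score built by overlaying signals from fact-checking
    # providers the user subscribes to, plus the author's reputation.
    # All names and weights are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Signal:
        provider: str   # who vouched (a fact-checker, a Community-Notes-style crowd, etc.)
        score: float    # 0.0 (untrustworthy) .. 1.0 (trustworthy)
        note: str       # human-readable context the UI can surface when drilling in

    def page_score(author_reputation: float, signals: list[Signal],
                   subscriptions: dict[str, float]) -> tuple[float, list[Signal]]:
        """Blend the author's reputation with signals from subscribed providers,
        weighted by how much the user trusts each provider."""
        used = [s for s in signals if s.provider in subscriptions]
        if not used:
            return author_reputation, []
        weighted = sum(s.score * subscriptions[s.provider] for s in used)
        total_weight = sum(subscriptions[s.provider] for s in used)
        # Simple blend: half author reputation, half the subscribed providers' verdicts.
        score = 0.5 * author_reputation + 0.5 * (weighted / total_weight)
        return score, used  # return the signals so the UI can show "why this score"

    signals = [
        Signal("fact-check-org", 0.2, "Video shows AI-generation artifacts"),
        Signal("community-notes", 0.3, "Original clip traced to a genAI account"),
    ]
    score, evidence = page_score(author_reputation=0.6, signals=signals,
                                 subscriptions={"fact-check-org": 1.0, "community-notes": 0.8})
    print(round(score, 2), [s.note for s in evidence])

The point isn't the math -- any sensible aggregation would do -- it's that the score stays decomposable, so you can always see which providers and which notes pushed it up or down.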
There's a huge bootstrapping and incentive problem though. I think all the big players would need to work together to build this. Social media, legacy media companies, browsers, etc.
This also presupposes people actually care about the truth, which unfortunately doesn't always seem to be the case.
I don't think Google delayed or kept this under wraps for any noble reasons. I think they were just disorganized as evidenced by their recent scrambling to compete in this space.
I don't even know if this will be possible, or how it would work, but it seems like the next iteration of social media will be based on some verification that the user is not a bot and is not using AI. Currently they are all incentivized not to stop bot activity because it increases user counts, ad revenue, etc.
Maybe the model is you have to pay per account to use it, or maybe the model will be something else.
I doubt this will make everyone just go back to primarily communicating in person/via voice servers but that is a possibility.
Exactly one lab has passed the test of morals vs. profit at this point, and that's DeepMind, and they were thoroughly punished for it.
Every value OpenAI has claimed to have hasn't lasted a millisecond longer than there was a profit motive to break it, and even Anthropic is doing military tech now.
> the image of those people on slot machines, mechanically and soullessly pulling levers because they are addicted. It's just so strange.
Worse, the audience is our parents and grandparents. They have little context to be able to sort out reality from this stuff.
Shorts are designed to trade your valuable attention for trite, low-effort content. Most decent shorts are just clips of longer-form content.
Do yourself a favor and avoid that kind of content, opting instead for long-form consumption. The discovery patterns are different, but you're less likely to encounter fake content if you develop a trust network of good channels.
The way AI is going, it will actually raise the cost of valid services: the cost of bullshit and spam is going down, which raises the cost for valid, non-AI-powered services to rise above the noise or to filter it out. There is only negative value in what "open"-ai is adding to the world right now. By playing the long-term AI safety card -- the hypothetical scenario of some AI supposedly becoming conscious in the future -- they try to pass themselves off as clean and innocent in all the damage they cause to society.
I just hope the online social media space gets enshittified to such a degree that it stops playing a major role in society, though sadly that is not how things usually seem to work.
On the other hand, by making public what the technology's capabilities are, doesn't it stop the problem of people having this tech in secret and using it before anybody is aware it's even possible?
i.e. a company developing this tech, keeping it under wraps, and, say, only using it for special government programmes...
Pandora’s box is open, not releasing models and tools is just going to result in someone else doing it.
They didn’t keep it under wraps; it’s just that the team considered the paper the thing being shipped, not the product. They still shipped the papers, which decentralized the knowledge.
Could even argue shipping the product and not the paper would have done more for AI safety; at least it would be controlled.
The best part is that eventually, over time, the AI slop will feed into training data more and more. I suspect it will be like the Kessler Syndrome of AI models.
The ability to make strange videos as a consumer... it's not inherently good or bad, it'll just be... weird
It doesn't take AI to fool people. They have been propagandised and lied to on a massive scale since the advent of mass media.
They also lie to themselves: they cannot detect overt bias or reflect on themselves and be aware of their hidden motives, resentments and wishful thinking. Including me and you.
Most people hold important beliefs about the world that are comically inaccurate.
AI changes absolutely nothing about how many true or false beliefs the average Joe holds.
> And I can't help but be saddened about OpenAI's decisions to unload a lot of this before recognizing the results of unleashing this to humanity
Yeah, and it's especially hypocritical coming from them, who said they'd refuse to disclose anything about GPT-3 because they said it was dangerous. And then a few years later: “Hey, remember this thing we told you was too dangerous before? Now we have a monetization strategy, so we're giving access to everyone, today.”
> there were enough AI'esque artifacts that one could confidently conclude it's fake.
And yet, you would not have known how to recognize those artifacts without "OpenAI's decisions to unload a lot of this before recognizing the results of unleashing this to humanity".
You could have said the same thing about Photoshop... Some people will learn to spot BS and think critically even if they can't quite put their finger on it and the video is very good (what, Trump fought a T-Rex, AND WON?), some people could be fooled by anything, and there is a lot in between.
Considering Google image search is polluted by AI-generated images at this moment, perhaps Google is afraid of making the search even worse?
I saw my first AI video that completely fooled commenters: https://imgur.com/a/cbjVKMU
This was not marked as AI-generated, and commenters were in awe of this fuzzy train, missing the "AIGC" signs.
I'm quite nervous for the future.