I think we’re just getting started with fake images and videos.
I suspect that people will be killed because of outrage over fake stuff. Before the Ukraine invasion, some of the folks in Donbas staged a fake bombing, complete with corpses from a morgue (with autopsy scars)[0]. That didn’t require any AI at all.
We can expect videos of unpopular minorities, doing horrible things, politicians saying stuff they never said, and evidence submitted to trial, that was completely made from whole cloth.
It’s gonna suck.
[0] https://www.bellingcat.com/news/2022/02/28/exploiting-cadave...
So far, I see the most concern about this sort of thing from people who came of age around or after Web 2.0 hit, at a time when even a good photoshop wasn’t too hard to peg as fake.
Those I know who lived through this issue when digital editing first became cheap seem to be more sanguine about it, while the younger generation on the opposite side is some combination of “whatever” and frustrated-but-accepting that yet another of countless weird things has invaded a reality that was never quite right to begin with.
The folks in between, roughly ages 20 to 40, are the most annoyed, though. They’re the eye of the storm on the way to proving that cyberpunk lacked the imagination required to properly calibrate our sense of when things were going to really get insane.
People were able to make very realistic fakes of anything 10-20 years ago using basic tools. Just ask the UFO nuts or the NSFW media enthusiasts. And as you mentioned, staged scenes have become somewhat common as well, including before the internet.
We can expect more of the same. Random unverified photos and videos should not be trusted: not in 2005, not in 2015, and not today.
I believe that this "everything was fine but it's going to get really bad" narrative is just yet another attempt at regulatory capture, to outlaw open-source AI. This entire fake bridge collapse might very well be a false flag to scare senile regulators.
Oh heck yes. One India-focused study that I saw introduced me to the term Cheap Fakes. Another report studied how genAI made phishing pipelines more efficient, allowing profitable targeting of groups who hitherto were too poor to be targeted.
So on one end you have large-scale pollution of the information commons, and on the other end we are now creating predator pipelines to generate content with all the efficiency of our vaunted AI productivity. It’s creating a dark forest for normal people to navigate, driving more government efforts to bring things under control. That in turn conflicts with freedom of speech and expression while dovetailing nicely with authoritarian tendencies.
Yes, it’s heartening to hear from all the people who find productivity gains from AI, but in totality it feels like we got our wishes granted by the Evil Genie.
> We can expect videos of unpopular minorities, doing horrible things
While manipulated photos exist, and misattributed real photos are very common, for the most part a lot of that does happen as well. And some people are too quick to ignore or gloss over it.
>We can expect videos of unpopular minorities, doing horrible things, politicians saying stuff they never said, and evidence submitted to trial, that was completely made from whole cloth.
AI videos of unpopular minorities already comprise an entire genre and AI political misinformation is already mainstream. I'm pretty sure every video of Donald Trump released by the WH is AI generated, to make him look less senile and frail than he really is. We're already there.
> We can expect videos of unpopular minorities, doing horrible things
Expect? You can post a random image of an unpopular minority, add a caption saying they did horrible things that isn’t reflected in the image at all, and tons of people will pile on. You don’t even need a fake video.