Hacker News

wyre · today at 6:01 PM

I think its ability to consume information is one of the scarier aspects of AI. The NSA, other governments, and multinational corporations have years of our individual browsing and consumption patterns. What happens when AI is analyzing all of that information exponentially faster than any human could, and communicating with relevant parties for their own benefit: to predict or manipulate behavior, build psychological profiles, identify vulnerabilities, etc.?

It's incredibly amusing to me to read some of the comments here critical of AI, which, if you didn't know any better, might make you think that AI is a worthless technology.


Replies

munificent · today at 10:02 PM

> if you didn't know any better, might make you think that AI is a worthless technology.

"Worthless" is ambiguous in this sentence. I think people understand that AI isn't useless, in that it does, at least to some degree, the things it is intended to do. At the same time, it might be valueless, in that a world without it would be preferable to some.

Landmines are not useless, but they are valueless. Opinions differ as to what degree generative AI is like landmines in terms of externalities.

heavyset_go · today at 9:52 PM

We didn't need generative AI for this; standard ML techniques from 10 years ago were already doing this, and they're cheaper.

macNchz · today at 6:42 PM

All hype and thought experiments about superintelligence and open questions about creativity and learning and IP aside, this is the area that gives me the biggest pause.

We've effectively created a panopticon in recent years: there are cameras absolutely everywhere. Despite that, though, the effort required to actually do something with all of those feeds has provided a sort of natural barrier to overreach. It would be effectively impossible to have people constantly watching the millions of camera feeds available in a modern city and flagging things, but AI certainly could.

Right now the compute for that is a barrier, but it would surprise me if we don't see cameras (which currently offer a variety of fairly basic computer vision "AI" alerting features for motion and object detection) coming with free-text prompts to trigger alerts. "Alert me if you see a red Nissan drive past the house.", "Alert me if you see a neighbor letting his dog poop in my yard.", "Alert the police if you see crime taking place [default on, opt out required]."

concinds · today at 8:12 PM

For decades people tried to correlate gait with personality and behavior. Then DNA, with IQ and all sorts of things. Now they're trying it with barely-noticeable facial features, again with personality traits. But the research is still crap bordering on woo, and barely predictive at all.

It's at least plausible that we are sufficiently complex that, even with tons of NSA and corporate data and extremely sophisticated models, you still wouldn't be able to predict someone's behavior with much accuracy.

jonahrd · today at 7:20 PM

This became extremely apparent to me watching Adam Curtis's "Russia 1985-1999: TraumaZone" series, which documents what it was like to live in the USSR during the fall of communism and (cheekily added) democracy. It was released in October 2022, meaning it was written and edited just before the AI curve really hit hard.

But so much of the takeaway is that it was "impossible" for a top-down government to actually process everything happening within the system it had created, and to respond appropriately and in a timely manner, thus creating problems like food shortages, corrupt industries, etc. So many of the problems were traced back to the monolithic information-processing institutions owned by the state.

But honestly... with modern LLMs all the way up the chain? I could envision a system like this working much more smoothly (while still being incredibly invasive and eroding most people's fundamental rights). And without massive food and labour shortages, where would the energy for change come from?

Liquix · today at 6:22 PM

> What happens when AI is analyzing all of that information...

They run simulations against N million personality models, accurately predicting the outcome of any news story, event, or stimulus. They use this power to shape national and global events to their own ends. This is what privacy and digital-sovereignty advocates have been warning the public about for over a decade, to no avail.

seg_lol · today at 7:03 PM

Jimmy Carr (comedian) thinks that AI's ability to be a surveillance savant is one of the biggest risks that people aren't thinking enough about: https://www.youtube.com/watch?v=jaYOskvlq18
