
Arisaka1 yesterday at 11:13 AM

My counterpoint to this is: if someone cannot verify the validity of a summary, is it truly a summary? And what would the end result be if the vast majority of people adopted or rejected a position based on a summary written by a third party?

This isn't strictly a case against AI, just a case that there's a contradiction in our definition of "well informed". We value over-consumption, to the point where we see learning 3 things in 5 minutes as better than learning 1 thing in 5 minutes, even if that means being entirely unable to defend or rebut what we just read.

I'm specifically referring to what you said: "the speaker used some obscure technical terminology I didn't know". That comes from a lack of assumed background knowledge, which is exactly what makes it hard to verify a summary on your own.


Replies

impendia today at 1:52 AM

If I needed something verifiable, or wanted to learn the material in any depth, I would certainly not rely on an AI summary. However, the summary contained links to source material by known experts, and I would cheerfully rely on those.

The same would be true if I expected there to be misleading bullshit out there. In this case, it's hard to imagine that any nonexpert would bother writing about the topic. ("Universal torsor method", in case you're curious.)

I skimmed the AI summary in ten seconds, gained a rough idea of what the speaker was referring to, and then went back to following the lecture.

auntienomen today at 12:25 AM

A lot of the time, the definitions peculiar to a subfield of science _don't_ require much or any additional technical background to understand. They're just abbreviations for special cases that frequently occur in the subfield.

Looking this sort of thing up on the fly in lecture is a great use for LLMs. You'll lose track of the lecture if you go off to find the definition in a reference text. And you can check your understanding against the material discussed in the lecture.

gosub100 yesterday at 12:43 PM

At least with pre-AI search, the information comes with a source, so there is some level of reputation to consider. With AI, it's a black box: someone decides what to train it on, and as someone said elsewhere, there's no way to police its sources. To get the best results, you have to turn it loose on everything.

So someone who wants a war, or wants Tweedledum to get more votes than Tweedledee, has an incentive to poison the well and disseminate fake content that makes it into the training set. Then a whole "safety" department has to manually untrain it so it isn't politically incorrect, racist, etc. Because the whole thesis is: don't think for yourself, let the AI think for you.

lazide yesterday at 11:31 AM

The issue is even deeper: the 1 thing in 5 minutes was probably only surface knowledge to begin with. We don't usually really 'know' something that quickly. But at least we'd have a chance.

The 3 things in 5 minutes is even worse. It's like following Google Maps everywhere without ever thinking about how to get from point A to point B: the odds of actually knowing anything afterwards are near zero.

And since it summarizes the original content, it's an even bigger issue: we never even have contact with the thing we're putatively learning from, so it's even harder to tell bullshit from reality.

It’s like we never even drove the directions Google Maps was giving us.

We’re going to end up with a huge number of extremely disconnected and useless people, who all absolutely insist they know things and can do stuff. :s