Hacker News

Looks like it is happening

154 points by jjgreen · yesterday at 9:19 PM · 103 comments

Comments

Chinjut · yesterday at 10:31 PM

Note the following comment by Jerry Ling: "The effect goes away if you search properly using the original submission date instead of the most recent submission date. By using most recent submission date, your analysis is biased because we’re so close to the beginning of 2026 so ofc we will see a peak that’s just people who have recently modified their submission."

sixtyj · yesterday at 9:38 PM

Well… it is happening. You can’t put spilled milk back in the bottle. But you can add future requirements that try to stop this behaviour.

E.g. the submission form could include a mandatory field: “I hereby confirm that I wrote this paper personally.” The terms and conditions would note that violating this rule can lead to a temporary or permanent ban for the authors. In a world where research success is measured by points in WoS, this could help slow the rise of LLM-generated papers.

wmf · yesterday at 9:36 PM

I assume hep = high energy physics in this context. PI = professor who received a government grant.

Peer review has never really been blind and I suspect PIs will reject papers from "outsiders" even if they are higher quality. This already happens to some extent today when the stakes are lower.

dang · yesterday at 9:56 PM

> submission numbers in the last couple months have nearly doubled with respect to the stable numbers of previous years

This is showing up (no pun intended) on HN as well. The # of submissions and # of submitters, which traditionally had been surprisingly stable—fluctuating within a fixed range for well over 10 years—have recently been reaching all-time highs. Not double, though...yet.

general_reveal · yesterday at 9:56 PM

“And further, by these, my son, be admonished: of making many books there is no end; and much study is a weariness of the flesh.” - Ecclesiastes 12:12 (KJV)

I suppose we’re entering TURBO mode for ‘of making many books there is no end’.

zoogeny · yesterday at 9:42 PM

One thing I have been guilty of, even though I am an AI maximalist, is asking the question: "If AI is so good, why don't we see X". Where X might be (in the context of vibe coding) the next redis, nginx, sqlite, or even linux.

But I really have to remember, we are at the leading edge here. Things take time. There is an opening (generation) and a closing (discernment). Perhaps AI will first generate a huge amount of noise and then whittle it down to the useful signal.

If that view is correct, then this is solid evidence of the amplification of possibility. People will decry the increase of noise, perhaps feeling swamped by it. But the next phase will be separating the wheat from the chaff. It is only in that second phase that we will really know the potential impact.

8organicbits · yesterday at 10:20 PM

> when AI agents started being able to write papers indistinguishable in quality from [...]

Given that arXiv lacks peer review, I'm not clear what quality bar is being referenced here.

pavel_lishin · yesterday at 10:10 PM

Apparently "hep-th" stands for "High Energy Physics - Theory".

sealeck · yesterday at 9:34 PM

There are many really excellent papers out there - the kind which will save you hours/months of work (or even make things that were previously inviable to build viable).

That said, it is amazing how terrible a lot of papers are; people are pressured to publish and therefore seem to get into weird ruts trying to do what they think will be published, rather than what is intellectually interesting...

antognini · today at 5:23 AM

I think the long term impact of this will be to strengthen the importance of social ties in academic publishing. As it is there are so many papers published in many fields that people tend to filter for papers published by big names and major institutions. But the inevitable torrent of AI slop will overwhelm anyone who is looking for any gems coming from outsiders. I suspect the net effect will be to make it even more important that you join a big name institution in order to be taken seriously.

mianos · yesterday at 10:50 PM

This title should have been editorialised. It's like a headline from the Daily Mirror.

hhsuey · yesterday at 10:57 PM

What's happening? I hate click bait titles like these.

AvAn12 · today at 12:59 AM

The shilling for AI continues. How much $$$ do the big tech companies pay Columbia? Oh yeah, and what exactly did Columbia agree to do to get the trmp admin to leave them alone? All speculation of course, but the circumstantial picture stinks.

tombert · yesterday at 10:44 PM

I like AI, I use Codex and ChatGPT like most people do, but I have to say that I am pretty tired of low-effort crap taking over everything, particularly YouTube.

There have always been content mills, but there was still some cost to producing the low-effort "Top 10" or "Iceberg Examination" videos. Now I will turn on a video about any topic, watch it for three minutes, immediately get a kind of uncanny vibe, and then the AI voice will make a pronunciation mistake (e.g. confusing "wind", the weather effect, with "wind", as in winding a spring), or the script starts getting redundant or repetitive in ways that are common with AI.

And I suspect these kinds of videos will become more common as time goes on. The cost of producing these videos is getting close to "free", meaning it doesn't take much to make a profit on them, even if their views per video are relatively low.

If AI has taught me anything, it's that there still is no substitute for effort. I'm sure AI is used in plenty of places where I don't notice it, because the people who used it still put in the effort to make a good product. There are people who don't just write a prompt like "make me a fifteen minute video about Chris Chan" and "generate me a thumbnail of Chris Chan with the caption 'he's gone too far'", but instead use AI as a tool to make something neat.

Genuine effort is hard, and rare, and these AI videos can give the facsimile of something that prior to 2023 was high effort. I hate it.

gtirloni · today at 1:49 AM

Who's spending money to write bots to comment on obscure (to me) websites and why?

bitbytebane · yesterday at 10:43 PM

STOP CITING YOUTUBERS AS A CREDIBLE SOURCE OF ANYTHING.

hmokiguess · yesterday at 10:00 PM

I think this is solid proof that the bedrock of academia is deeply motivated by money and still defaults to optimizing where it impacts its bottom line. If professors can get more grants and more publications in less time with less spending, of course they are going to be doing that. This isn't just because of AI, but also because of how this system is designed in the first place.

sidrag22 · yesterday at 9:49 PM

Noise is going to be the coming years' biggest issue for so many fields. It's a losing battle, like arguing with a conspiracy-minded relative: you can slowly and clearly address one conspiracy theory and disprove it, but by the time you do, they are deep into 8 new ones.

guerrilla · yesterday at 10:07 PM

Website's down. What was it about?

NooneAtAll3 · today at 12:05 AM

Clickbait title

what would be a better one?

tempodox · today at 1:42 AM

Convenience dictates that we will be drowning in slop as long as convenience lets us rank academics by number of publications. Publish or perish?

MoonWalk · today at 12:39 AM

What is?

mclau153 · yesterday at 9:40 PM

What is happening?

lloydatkinson · yesterday at 11:23 PM

Isn't there a rule about vague titles like this?

selridge · yesterday at 9:42 PM

Honestly, this is good. We were already in a completely unsustainable system. Nobody had an alternative. We still don’t have one but at least now it’s not just merely unsustainable— it is completely fucked in half.

This kind of pattern is gonna get repeated in a lot of sectors when previous practices that were merely unsustainable become unsustained.

ModernMech · yesterday at 10:14 PM

I mean... I dunno I wish the AI could write my papers. I ask it to and it's just bad. The research models return research that doesn't look anything like the research I do on my own -- half of it is wrong, the rest is shallow, and it's hardly comprehensive despite having access to everything (it will fail to find things unless you specifically prompt for them, and even then if the signal is too low it'll be wrong about it). So I can't even trust it to do something as simple as a literature review.

Insofar as most research is awful, it's true that the AI is producing research that looks and sounds like most of it out there today. But common-case research is not what propels society forward. If we try to automate research with the mediocrity machine, we'll just get mediocre research.

seg_lol · today at 1:58 AM

If someone mentions Sabine Hossenfelder and it isn't to expose her as a rage-bait intellectual dark web grifter, then it puts that person in a suspect light.

hxbdg · yesterday at 10:17 PM

[dead]