My comment was more an answer to the proposed gatekeeping of science as a human activity.
Yes, AI is still not great in the grand scheme of things. But everyone actively using it has grown concerned over the past two months by the leapfrogging of LLMs - and surprised, since they thought we had reached a plateau.
We will see in a year or two whether humans still hold an advantage in research - currently very few do in software development, despite what they think of themselves.
> gatekeeping of science as a human activity
The other side of the coin is: automating science as a machine activity.
Is that what we want? I agree with you that the use of language models in science is an inevitable paradigm shift, but now is the time to make collective decisions about how we're going to assimilate this increasingly superhuman "intelligence" into academic practice, and into the rest of daily life. Otherwise we will be the ones assimilated by a force beyond our control.
The progress is so rapid that the only people who might have control over the process are those with self-interest, mainly financial - interests not aligned with, and in some respects opposed to, the interests of humanity.
The single most valuable part of science is keeping the gates: nothing is added to the corpus of scientific knowledge unless it can be properly substantiated.