I see this fallacy being committed a lot these days: "Because of LLMs, you will no longer need a skill you used to need", followed by a handwave that this is bad.
Academia doesn't want to produce astrophysics (or any field's) scientists just so the people who became scientists can feel warm and fuzzy inside when looking at the stars; it wants to produce scientists who can produce useful results. Bob produced a useful result with the help of an agent, and learned how to do that, so Bob had, for all intents and purposes, the exact same output as Alice.
Well, unless you're saying that astrophysics as a field literally does not matter at all, no matter what results it produces, in which case, why are we bothering with it at all?
> why are we bothering with it at all?
Because we largely want people who have committed to tens of thousands of dollars of debt to feel warm and fuzzy enough to promote the experience so that the business model doesn't collapse.
It’s difficult to imagine anyone truly regretting doing a course in astrophysics, or any of the liberal arts and sciences, if they have a modicum of passion, but it’s very believable that a majority of them won’t go on to have a career in it directly, whatever the field.
They’re probably more likely to gain employment through their data science skills, or whatever core competencies they honed, or simply the fact that they’ve proven they can learn highly abstract concepts, or whatever their field generalises to.
Most of the jobs aren’t in the highly specific academic outcome itself.
> Take away the agent, and Bob is still a first-year student who hasn't started yet. The year happened around him but not inside him. He shipped a product, but he didn't learn a trade.
We're minting an entire generation of people completely dependent on VC funding. What happens if/when the AI companies fail to find a path to profitability and the VC funding dries up?
I was reading in the article that what matters is the process that leads to the (typically useless) result, i.e. what the people get out of it.
Once I realized that the white-on-black contrast was hurting my eyes, I decided to stop reading, as I didn't want to keep seeing stripes when looking away.
Some activities have outcomes that aren't strictly in the results.
The arguments of the LLM-psychosis-afflicted get more and more desperate. Astrophysics is about understanding and thinking; this comment paints it as result-oriented (whatever that means).
The industrialization of academia hasn't even produced more results, it has produced more meaningless papers. Just like LLMs produce the 10,000th note-taking app, which for the LLM-psychosis-afflicted is apparently enough.
Why should we only do things that produce some sort of value? Do we really want to reduce all of human existence to increasing profits?
Hard sciences play a crucial and often unseen role in our society: they help train humans to develop critical thinking. Not everyone with a PhD in astrophysics ends up doing astrophysics in life; it's a discipline, a training regime for our minds. After that PhD, the result is a human being who can tackle hard problems. We have many other such disciplines (basically any PhD in the hard sciences) which produce this outcome.
Until the LLM is wrong and Bob passes the erroneous result off as accurate, reliable and vetted by a knowledgeable person. At that point Bob is not producing a useful result. Then it becomes a trap other people might get caught in, wasting valuable time and energy.
The goal of academic research is to create understanding, not papers. If we outsource all research to LLMs, then we are only producing the latter.
You missed the argument. When we are talking about faculty, yes, their result is the only thing that matters, so if it was produced quicker with an LLM, that's great. But when we are talking about the student, there is a drastic difference between the with-LLM and without-LLM cases: in the latter, they have a much better understanding. And that matters in a system whose job is educating future physicists.
Is that what "academia" wants? Last I checked, "academia" is not a dude I can call and ask for an opinion or a definition of what it's interested in.
I will make an explicit, plausible, counterpoint: academia wants to produce understanding. This is, more or less, by definition, not possible with an AI directly (obviously AIs can be useful in the process).
Take GR as an example. The vast majority of the dynamical character of the theory is inaccessible to human beings. We studied it because we wanted to understand it, and only secondarily because we had a concrete "result" we were trying to "achieve."
A person who cares only about results and not about understanding is barely a person, in my opinion.
Completely missed the point of the blog post, which was that the goal is producing the scientist, not the result.
We aren't talking about pocket calculators here (I see the irony of a phone app in a pocket); LLMs are hugely expensive things, made and controlled behind costly commercial subscriptions, likely in the middle of a huge investment bubble, and their stability is uncertain. So we all need to be careful about "gee, we don't need that skill or person anymore", etc.
The problem is that LLMs stop working after a certain point of complexity or specificity, which is very obvious once you try to use one in a field you have a deep understanding of. At that point, your own skills should be able to carry you forward, but if you've been using an LLM to do things for you since the start, you won't have the necessary skills.
Once they have to solve a novel problem, one that was not, for all intents and purposes, already solved, Alice will be able to apply her skillset to it, whereas Bob will just run into a wall when the LLM starts producing garbage.
It seems to me that "high-skill human" > "LLM" > "low-skill human". The trap is that people with low skill levels will see a fast improvement in their output, at the hidden cost of the slow build-up of skills that has a much higher ceiling.