This sounds misguided. In my limited experience, models get basic knowledge so wrong that giving them any sort of independence will not produce publications that positively impact a professor's reputation or contribute to science. At least the reviews and papers I've read that contained AI content did not give me the impression that we should have more of this. Models also require much more supervision than students, with the added issues that they cannot learn long-term from your interactions and that you get none of the enjoyment of teaching something to someone. They're really good at finding papers, though, perhaps because navigating search engines has become a pain. Maybe this will change in the future, but saying you're tempted right now is like saying you're tempted to replace your HPC cluster with quantum computers. It's a bit early.
Also, 90% of citations generated by AI are wrong or straight-up don't exist. It has a long way to go before it can reliably write credible papers.
[Source: https://www.reddit.com/r/AskReddit/comments/o6hlry/statistic... ]
Upon reading this:
> The issue is not whether my students are valuable. In the long run, they are invaluable. The issue is that their value emerges slowly, whereas AI delivers immediate returns.
I had the thought that it's more like hiring only autistic/on-the-spectrum employees who will, on a whim, do exactly what their interpretation of your request was, or possibly worse, literally what you said without considering further consequences.