It is interesting that most of our modes of interaction with AI are still just textboxes. The only big UX change in the last three years has been the introduction of the Claude Code / OpenAI Codex tools. They feel amazing to use, like you're working with another independent mind.
I am curious what the user interfaces of AI will look like in the future; I think whoever can crack that will create immense value.
How many trillions of dollars have we spent on these things?
Would we not expect similar levels of progress in other industries given such massive investment?
> Again, we have moved past hallucinations and errors to more subtle, and often human-like, concerns.
In my experience we just get both: the constant risk of some catastrophic hallucination buried in the output, in addition to more subtle and pervasive concerns. I haven't tried with Gemini 3, but when I prompted Claude to write a 20-page short story it couldn't even keep basic chronology and characters straight. I wonder if the 14-page research paper would stand up to scrutiny.
Google's advancement is not just in software; it is also in hardware. They use their own hardware for training as well as inference [1].
[1] https://finance.yahoo.com/news/alphabet-just-blew-past-expec...
I find Gemini 3 to be really good. I'm impressed. However, the responses still seem to be bounded by the existing literature and data. If asked to come up with new ideas to improve on existing results for some math problems, it tends to recite known results only. Maybe I didn't challenge it enough or present problems that have scope for new ideas?
Really nitpicky, I know, but GPT-3 was June 2020. ChatGPT was 3.5, and the author even gets that right in an image caption. That doesn’t make it any more or less impressive though.
For Claude Code, Antigravity, etc., do people really just let an LLM loose on their own personal system?
I feel like these should run in a cloud environment, or at least on some specific machine where I don't care what it does.
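Something like a disposable container is what I have in mind. A rough sketch of the idea (the image name and the "open a shell and launch the agent inside" step are placeholders, not any official setup, and the agent still needs network access to reach its API):

    # Rough sketch: give the agent a throwaway container instead of my laptop.
    # Only the current project directory is mounted; anything else it touches
    # stays inside the container. The base image is a placeholder.
    import subprocess
    from pathlib import Path

    project = Path.cwd()  # the one directory I'm willing to let it modify

    subprocess.run(
        ["docker", "run", "--rm", "-it",
         "-v", f"{project}:/workspace",  # agent sees only this mount
         "-w", "/workspace",
         "node:20",                      # placeholder image
         "bash"],                        # open a shell, then install/run the agent inside
        check=True,
    )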
> But it suggests that “human in the loop” is evolving from “human who fixes AI mistakes” to “human who directs AI work.” And that may be the biggest change since the release of ChatGPT.
I feel like I've been hearing this for at least 1.5 years at this point (since the launch of GPT-4/Claude 3). I certainly agree we've been heading in this direction, but when will this become unambiguously true rather than a phrase people say?
> So is this a PhD-level intelligence? In some ways, yes, if you define a PhD level intelligence as doing the work of a competent grad student at a research university. But it also had some of the weaknesses of a grad student.
As a current graduate student, I have seen similar comments in academia. My colleagues agree that a conversation with these recent models feels like chatting with an expert in their subfields. I don't know what that means for us, since research as a field would not be immune to advances in AI tech. I still hope this world values natural intelligence and the drive to do things more heavily than a robot brute-forcing its way into saying the "right" things.
For whatever reason, Gemini 3 is the first AI I have used for intelligence rather than skills. I suspect a lot more will follow, but it's a major threshold to be broken.
I used GPT/Claude a ton for writing code, extracting knowledge from docs, formatting graphs and tables, etc.
But Gemini 3 crossed a threshold where conversations about topics I was exploring, or about product design, were actually useful. Instead of me asking 'what design pattern would be useful here', or something like that, it introduces concepts to the conversation. That's a new capability and a step-function improvement.
I recently (last week) used Nano Banana Pro for some specific image generation. It was leagues ahead of 2.5. Today I used Gemini 3 to refine a very hard-to-write email. It made some really good suggestions. I did not take its email text verbatim; instead I used the text and suggestions to improve my own email. Did a few drafts with Gemini 3 critiquing them. Very useful feedback. My final submission of "..evaluate this email..." got Gemini 3 to say something like "This is 9.5/10". I sorta pride myself on my writing skills, but must admit that my final version was much better than my first. Gemini kept track of the whole chat thread, noting changes from previous submissions -- kinda eerie really. Total time maybe 15 minutes.

Do I think Gemini will write all my emails verbatim, copy/paste... No. Does Gemini make me (already a pretty good writer) much better? Absolutely.

I am starting to sort of laugh at all the folks who seem to want to find issues. I read someone criticizing Nano Banana Pro because it did not provide excellent results given a prompt that I could barely understand. Folks criticize Gemini 3 because they cannot copy/paste results, expecting to simply copy/paste text with no further effort on their side. Myself, I find these tools pretty damn impressive. I need to ensure I provide good image prompts. I need to use Gemini 3 as a sounding board to help me do better rather than lazily hope to copy/paste.

My experience... Thanks Google. Thanks OpenAI (I also use ChatGPT similarly -- just for text). HTH, NSC
I have Gemini Pro included on my Google Workspace accounts; however, I find the responses from ChatGPT more "natural", or maybe more in line with what I want the response to be. Maybe it is only me.
First, the fact we have moved this far with LLMs is incredible.
Second, I think the PhD paper example is a disingenuous example of capability. It's a cherry-picked iteration on a crude analysis of some papers that have already done the work, with no peer review. I can hear the "but it developed novel metrics" comments already: no, it took patterns from its training data and applied them to the prompt data, without peer review.
I think the fact that the author had to prompt it with "make it better" is a failure of these LLMs, not a success, in that they have no actual understanding of what it takes to make a genuinely good paper. It's cargo-cult behavior: rolling a magic 8 ball until we are satisfied with the answer. That's not good practice; it's wishful thinking. This application of LLMs to research papers is causing a massive mess in the academic world because, unsurprisingly, the AI practitioners face no risk and high reward for uncorrected behavior:
- https://www.nytimes.com/2025/08/04/science/04hs-science-pape...
- https://www.nytimes.com/2025/11/04/science/letters-to-the-ed...
Sinusoidal, not the singularity.
Yeah, well, that’s also what an asymptotic function looks like.
This article is a wishlist, not something grounded in reality.
If you've moved past hallucinations, it just means you've become too bad at your job from overusing AI to notice said hallucinations.
I can't believe anyone seriously thinks there hasn't been a slowdown in AI development, when LLMs have hit a wall since ChatGPT came out in 2022.
Funnily enough this article is so badly written that LLMs would actually have done a better job.
Every time I see an article like this, it's always missing the key questions: is it any good, is it correct? They always show you the part that is impressive: "it walked the tricky tightrope of figuring out what might be an interesting topic and how to execute it with the data it had - one of the hardest things to teach."
Then it goes on: "After a couple of vague commands (“build it out more, make it better”) I got a 14 page paper." I hear... "I got 14 pages of words." But is it a good paper, one that another PhD would think is good? Is it even coherent?
When I see the code these systems generate within a complex system, I think okay, well that's kinda close, but this is wrong and this is a security problem, etc etc. But because I'm not a PhD in these subjects, am I supposed to think, "Well of course the 14 pages on a topic I'm not an expert in are good"?
It just doesn't add up... For the things I understand, the output looks good at first but isn't shippable. Yet the things I don't understand must be great?