Dude. I just asked my computer to write [ad lib basic utility script] and it spit out a syntactically correct C program that does it, along with instructions for compiling it.
And then I asked it for [ad lib cocktail request] and got back thorough instructions.
We did that with sand. That we got from the ground. And taught it to talk. And write C programs.
Never mind what? That I had to ask twice? Or five times?
What's the maximum number of tries you feel the talking sand should need to adequately answer your question before you're impressed by the talking sand?
Crows and parrots are amazing talkers too, but there's a hard limit to how much sense they make. Would you want those birds teaching your kids and dispensing your medicine?
I don't think it has anything to do with being impressed or not. It's about being careful not to put too much trust in something so fallible. Precisely because it's so amazing, people overestimate where it can be reliably used.
First of all, I appreciate your comment. Yes, it's fucking amazing. (I usually imagine it being "light" and not "sand", though. "Sand" is much more poignant!)
But I think people aren't arguing about how amazing it is, but about its specific applicability. There's also a lot of toxic hype and FUD going around, which can be tiring and frustrating.
This is all awesome, but a bit off topic for the thread, which focuses on AI for science.
The disconnect here is that the cost of iteration is low and it's relatively easy to verify the quality of a generated C program (does the compiler issue warnings or errors? does it pass a test suite?) or a recipe (basic experience is probably enough to tell if an ingredient seems out of place or proportions are wildly off).
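To make "cheap to verify" concrete, here's a made-up example (the program and commands are illustrative, not from the thread): say the model emits a little line-counting utility. The compiler plus a one-line known-answer test catches most nonsense in seconds:

    /* count_lines.c -- hypothetical model-generated utility:
       count the lines arriving on stdin */
    #include <stdio.h>

    int main(void) {
        int c, lines = 0;
        while ((c = getchar()) != EOF) {
            if (c == '\n')
                lines++;
        }
        printf("%d\n", lines);
        return 0;
    }

    $ cc -Wall -Wextra -Werror -o count_lines count_lines.c
    $ printf 'a\nb\nc\n' | ./count_lines    # expect 3
    3

The whole loop (generate, compile, smoke-test, regenerate) costs seconds, which is exactly why the iteration cost stays low.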
In science, verifying a prediction is often super difficult and/or expensive, because at prediction time we're trying to shortcut around an expensive or intractable measurement or simulation. Unreliable models can really change the tradeoff point of whether AI accelerates science or just massively inflates the burn rate.
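Back-of-envelope, with invented numbers: if validating one prediction costs V and a fraction p of the model's predictions hold up, you pay roughly V/p per confirmed result. At V = $10k and p = 0.5 that's $20k per hit; at p = 0.02 it's $500k per hit, and past some point the "acceleration" is just a more expensive way to run the same experiments.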