Hacker News

Accelerating scientific breakthroughs with an AI co-scientist

354 points by Jimmc414 last Wednesday at 2:32 PM | 189 comments

Comments

crypto420 last Wednesday at 4:56 PM

I'm not sure if people here even read the entirety of the article. From the article:

> We applied the AI co-scientist to assist with the prediction of drug repurposing opportunities and, with our partners, validated predictions through computational biology, expert clinician feedback, and in vitro experiments.

> Notably, the AI co-scientist proposed novel repurposing candidates for acute myeloid leukemia (AML). Subsequent experiments validated these proposals, confirming that the suggested drugs inhibit tumor viability at clinically relevant concentrations in multiple AML cell lines.

and,

> For this test, expert researchers instructed the AI co-scientist to explore a topic that had already been subject to novel discovery in their group, but had not yet been revealed in the public domain, namely, to explain how capsid-forming phage-inducible chromosomal islands (cf-PICIs) exist across multiple bacterial species. The AI co-scientist system independently proposed that cf-PICIs interact with diverse phage tails to expand their host range. This in silico discovery, which had been experimentally validated in the original novel laboratory experiments performed prior to use of the AI co-scientist system, are described in co-timed manuscripts (1, 2) with our collaborators at the Fleming Initiative and Imperial College London. This illustrates the value of the AI co-scientist system as an assistive technology, as it was able to leverage decades of research comprising all prior open access literature on this topic.

The model came up with new scientific hypotheses that were then validated in the lab, which is quite significant.

show 10 replies
celltalk last Wednesday at 5:20 PM

“Drug repurposing for AML” lol

As a person who is literally doing his PhD on AML, implementing molecular subtyping and ex-vivo drug predictions, I find this super random.

I would truly suggest our pipeline instead of random drug repurposing :)

https://celvox.co/solutions/seAMLess

edit: Btw we’re looking for ways to fund/commercialize our pipeline. You could contact us through the site if you’re interested!

show 3 replies
mnky9800n last Wednesday at 2:54 PM

Tbh I don’t see why I would use this. I don’t need an AI to connect ideas or come up with new hypotheses. I need it to write lots of data pipeline code to take data that is organized by project, each in a unique way, each with its own set of multimodal data plus metadata all stored in long-form documents with no regular formatting, and normalize it all into a giant database. I need it to write and test a data pipeline to detect events in both amplitude space and frequency space in acoustic data. I need it to test out front ends for these data analysis backends so I can play with the data. I think this is domain specific: drug discovery probably requires testing tons of variables, iterating one by one through the values available, but that’s not true for my research. Not everything is for everybody, and that’s okay.

show 6 replies
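For what it's worth, the amplitude-space half of the event-detection task described above can be sketched as a simple threshold-plus-merge pass over the samples. Everything here (the synthetic signal, the 0.5 threshold, the gap size) is made up purely for illustration:

```python
import math

def detect_events(signal, threshold, min_gap=5):
    """Return (start, end) index pairs where |signal| exceeds threshold,
    merging detections separated by fewer than min_gap quiet samples."""
    events = []
    start = None
    end = None
    for i, x in enumerate(signal):
        if abs(x) >= threshold:
            if start is None:
                start = i
            end = i
        elif start is not None and i - end >= min_gap:
            events.append((start, end))
            start = None
    if start is not None:
        events.append((start, end))
    return events

# Synthetic acoustic trace: quiet background with two loud bursts.
sig = [0.01 * math.sin(0.3 * i) for i in range(200)]
for i in range(50, 60):
    sig[i] += 1.0
for i in range(140, 145):
    sig[i] += 1.0

print(detect_events(sig, threshold=0.5))  # [(50, 59), (140, 144)]
```

A frequency-space detector would run the same kind of pass over STFT magnitudes per frequency bin instead of raw samples.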
jjk166 yesterday at 5:39 PM

This is in line with how I've been using AI in my workflow recently. I give it a summary of my findings thus far and ask it to suggest explanations and recommend further tests I should conduct. About 70% of its ideas are dumb, and sometimes I need to give it a little extra prompting, but it does spit out sensible ideas that hadn't occurred to me. Obviously it's not going to replace a knowledgeable human, but as a tool to assist that human it has outperformed some very expensive PhD-level consultants.

quinnjh last Wednesday at 5:30 PM

The market seems excited to charge in whatever direction the weathervane last pointed, regardless of the real outcomes of running in that direction. Hopefully I’m wrong, but it reminds me very much of this study (I’ll quote a paraphrase):

“A groundbreaking new study of over 1,000 scientists at a major U.S. materials science firm reveals a disturbing paradox: When paired with AI systems, top researchers become extraordinarily more productive – and extraordinarily less satisfied with their work. The numbers tell a stark story: AI assistance helped scientists discover 44% more materials and increased patent filings by 39%. But here's the twist: 82% of these same scientists reported feeling less fulfilled in their jobs.”

Quote from https://futureofbeinghuman.com/p/is-ai-poised-to-suck-the-so...

Referencing this study https://aidantr.github.io/files/AI_innovation.pdf

show 4 replies
azinman2 last Wednesday at 5:04 PM

It seems in general we’re heading toward Minsky’s Society of Mind concept. I know OpenAI wants to collapse all their models into a single omni model that can do it all, but I wonder if under the hood it’d just be about routing. It’d make sense to me for agents to specialize in certain tool calls, ways of thinking, etc.; that kind of conceptual framework/scaffolding provides a useful direction.

show 3 replies
ZeroGravitas yesterday at 3:45 PM

I read the scientist's quote in a newspaper article first, and the surprise seemed to hinge on two things: his entire team working on the problem for a decade without publishing anything in a way that AI could gobble up (which seemed strange to me), and no other researcher working on the problem over that decade-ish timespan publishing anything suggesting the same idea.

Which seems a hard thing to disprove.

In which case, if some rival of his had done the same search a month earlier, could he have claimed priority? And would the question of whether the idea had leaked then have been a bit more salient to him? (Though it seems the decade of work might be the important bit, not the general idea.)

hinkley last Wednesday at 7:46 PM

I am generally down on AI these days but I still remember using Eliza for the first time.

I think I could accept an AI prompting me instead of the other way around: something that asks you to work through a checklist of problems and how you will address them.

I’d also love to see AI techniques applied to property-based testing. The process of narrowing down from 2^32 inputs to six interesting ones works better if it’s faster.

show 1 reply
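For context, the narrowing-down step in property-based testing is usually called shrinking. A minimal hand-rolled sketch of the idea (no particular library; the toy property, the random search, and the binary-search shrink are all made up for illustration):

```python
import random

def property_holds(x):
    # Toy property under test: pretend any input >= 1000 triggers the bug.
    return x < 1000

def find_failure(trials=10_000, seed=0):
    # Random search over the 2**32-sized input space for a counterexample.
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.randrange(2**32)
        if not property_holds(x):
            return x
    return None

def shrink(x):
    # Binary-search toward 0 for the smallest input that still fails,
    # so the counterexample you debug is minimal, not a random huge int.
    lo, hi = 0, x
    while lo < hi:
        mid = (lo + hi) // 2
        if property_holds(mid):
            lo = mid + 1
        else:
            hi = mid
    return hi

failing = find_failure()
minimal = shrink(failing) if failing is not None else None
print(minimal)  # 1000: the smallest failing input for this toy property
```

The binary-search shrink only works because failures here are monotone in x; real shrinkers (Hypothesis, QuickCheck) take greedy local steps over structured values, which is exactly the kind of search where smarter heuristics could help.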
dr_kretyn yesterday at 1:45 AM

Interesting set of comments. Personally - fantastic! It's a co-scientist and not a "scientist". There's enormous value in reviewing work and ranking "what" might provide some interesting output. A lot of ideas are not even considered because they aren't "common" because their components are expensive. If there's a "reasonable expectation" then there's a lower risk of failure. I'm not a scientist "anymore" but I'd love to play with this and see what odd combinations might potentially produce.

stanford_labrat last Wednesday at 5:45 PM

So I'm a biomedical scientist (in training I suppose...I'm in my 3rd year of a Genetics PhD) and I have seen this trend a couple of times now where AI developers tout that AI will accelerate biomedical discovery through a very specific argument that AI will be smarter and generate better hypotheses than humans.

For example, in this Google essay they claim that CRISPR was a transdisciplinary endeavor, "which combined expertise ranging from microbiology to genetics to molecular biology", and this is the basis of their argument that an AI co-scientist will be better able to integrate multiple fields at once to generate novel and better hypotheses. For one, what they fail to understand as computer scientists (I suspect due to not being intimately familiar with biomedical research) is that microbiology, genetics, and molecular biology are more closely linked than a lay person might expect. There is no large leap between microbiology and genetics that would slow down someone like Doudna or even myself; I use techniques from multiple domains in my daily work. These all fall under the broad domain of what I'll call "cellular/micro biology". As another example, Dario Amodei of Anthropic wrote something similar in his essay Machines of Loving Grace: that the limiting factor in biomedicine is a lack of "talented, creative researchers", a gap AI could fill [1].

The problem with both of these ideas is that they misunderstand the rate-limiting factor in biomedical research, which to them is a lack of good ideas. And this is very much not the case. Biologists have tons of good ideas. The rate-limiting step is testing all these good ideas with sufficient rigor to decide whether to continue exploring a particular hypothesis or to abandon the project for something else. From my own work: the hypothesis driving my thesis I came up with over the course of a month or two. The actual amount of work prescribed by my thesis committee to fully explore whether or not it was correct? About 3 years' worth. Good ideas are cheap in this field.

Overall I think these views stem from field-specific nuances that don't necessarily translate. I'm not a computer scientist, but I imagine that in computer science the rate-limiting factor is not actually testing hypotheses but generating good ones. It's not as if the code you write will take multiple months to run before you get an answer to your question (maybe it will? I'm not educated enough about this to make a hard claim; in biology, it is very common for one experiment to take multiple months before you know the answer to your question, or even whether the experiment failed and you have to do it again). But I'm happy to hear from a CS PhD or researcher about this.

All this being said, I am a big fan of AI. I try to use ChatGPT all the time: I ask it research questions, ask it to search the literature and summarize findings, etc. I even used it literally yesterday to make a deep dive into a somewhat unfamiliar branch of developmental biology easier (and I was very satisfied with the result). But for scientific design and hypothesis generation? At the moment, useless. AI and other LLMs at this point are a very powerful version of Google plus a code writer. And it's not even correct 30% of the time to boot, so you have to be extremely careful when using it. I do think that wasting less time exploring hypotheses that are incorrect or bad is a good thing. But the problem is that we can already identify good and bad hypotheses pretty easily. We don't need AI for that; what slows down research is the actual amount of testing these hypotheses require. Oh, and politics, which I doubt AI can magic away for us.

[1] https://darioamodei.com/machines-of-loving-grace#1-biology-a...

show 1 reply
bjarlsson last Wednesday at 6:25 PM

This is marketing material from Google and people are accepting the premises uncritically.

show 1 reply
pazimzadeh yesterday at 8:40 AM

I don't think generating hypotheses is where AI is useful; I think it's more useful for doing back-of-napkin (or more serious) calculations, helping find protocols, sourcing literature, etc. Grunt work, basically. Generating hypotheses is the fun, exciting part that I doubt scientists want to outsource to AI.

show 1 reply
anu7df yesterday at 5:06 AM

Oh great. This looks exactly like some PhD advisors I've heard of: creating a list of "ideas" and having their lab-monkey PhD students work on it. Surefire way to kill the joy of discovery and passion for science. Nice going, Google :). Also, validating an "in silico" discovery like this should be double-blind: if I know the idea and its final outcome, the prompts I give are vastly different from those I'd give if I did not.

writeslowly last Wednesday at 7:17 PM

I recently ran across this toaster-in-dishwasher article [1] again and was disappointed that the LLMs I have access to couldn't replicate the "hairdryer-in-aquarium" breakthrough (or the toaster-in-dishwasher scenario, though I haven't explored that as much), which has made me a bit skeptical of the ability of LLMs to do novel research. Maybe the new OpenAI research AI is smart enough to figure it out?

[1] https://jdstillwater.blogspot.com/2012/05/i-put-toaster-in-d...

show 1 reply
waynenilsen last Wednesday at 5:21 PM

It seems that humans may become the hands of the AI before the robots are ready.

Mechanical Turk, but for biology.

show 1 reply
insane_dreamer last Wednesday at 9:50 PM

Seems like the primary value-add is speeding up the literature review step of hypothesis formulation.

inglor_cz yesterday at 8:26 AM

Loosely connected: I recently read about an Austrian pilot project in Vienna, where they analyze patients' tumors and let AI suggest treatment. They had some spectacular successes, in which the AI recommended drugs that aren't normally used for that type of cancer but worked well when deployed.

m3kw9 last Wednesday at 5:37 PM

I would really like to see a genuine breakthrough amid all this talk about AI delivering one.

curtisszmania yesterday at 4:18 AM

Here's my take on the post from Jimmc414.

It's mind-blowing to think that AI can now collaborate with scientists to accelerate breakthroughs in various fields.

This collaboration isn't just about augmenting human capabilities, but also about redefining what it means to be a scientist. By leveraging AI as an extension of their own minds, researchers can tap into new areas of inquiry and push the boundaries of knowledge at an unprecedented pace.

Here are some key implications of this development:

• AI-powered analysis can process vast amounts of data in seconds, freeing up human researchers to focus on high-level insights and creative problem-solving.

• This synergy between humans and AI enables a more holistic understanding of complex systems and phenomena, allowing for the identification of new patterns and relationships that might otherwise have gone unnoticed.

• The accelerated pace of discovery facilitated by AI co-scientists will likely lead to new breakthroughs in fields like medicine, climate science, and materials engineering.

But here's the million-dollar question: as we continue to integrate AI into scientific research, what does this mean for the role of human researchers themselves? Will they become increasingly specialized and narrow-focused, or will they adapt by becoming more interdisciplinary and collaborative?

This development has me thinking about my own experiences working with interdisciplinary teams. One thing that's clear is that the most successful projects are those where individuals from different backgrounds come together to share their unique perspectives and expertise.

I'm curious to hear from others: what do you think the future holds for human-AI collaboration in scientific research? Will we see a new era of unprecedented breakthroughs, or will we need to address some of the challenges that arise as we rely more heavily on AI to drive innovation?

cratermoon yesterday at 3:58 AM

meanwhile: OpenAI Deep Research - Six Strange Failures: <https://futuresearch.ai/oaidr-feb-2025>

ThouYS last Wednesday at 5:04 PM

I guess we do live in the fast-takeoff world.

ACV001 last Wednesday at 5:03 PM

Just as the invention of writing degraded human memory (before it, people memorized whole stories and poems), with the advent of AI humans will degrade their thinking skills and knowledge in general.

show 1 reply