I read the cf-PICI paper (abstract) and the hypothesis from the AI co-scientist. While the mechanism from the actual paper is pretty cool (if I'm understanding it correctly), I'm not particularly impressed with the hypothesis from the co-scientist.
It's quite a natural next step to consider the tails and their binding partners, so much so that it's probably what I would have done, and my background in this particular area amounts to about 20 minutes. If the co-scientist had hypothesised the novel mechanism to start with, then I would be impressed by its intelligence. I would bet that there were enough hints towards these next steps in the discussion sections of the referenced papers anyway.
What's a bit suspicious: in the Supplementary Information, around where the hypothesis is laid out, it says "In addition, our own preliminary data indicate that cf-PICI capsids can indeed interact with tails from multiple phage types, providing further impetus for this research direction." (page 35). It's a bit weird that it uses "our own preliminary data".
> A bit weird that it uses "our own preliminary data"
I think the potential of LLM-based analysis is sky-high, given the amount of concurrent research happening and the high context load required to understand the papers. However, there is a lot of pressure to show how amazing AI is, and we should be vigilant. So my first thought was: could it be that the training data, the context, or a RAG pipeline had access to a file it should not have, contaminating the result? That phrasing is indirect evidence that something may have been leaked.