Accelerating scientific breakthroughs with an AI co-scientist
The AI co-scientist, developed with Gemini 2.0, enhances scientific research by generating hypotheses and proposals, validated through experiments, showing superior performance and iterative self-improvement capabilities.
Read original articleThe AI co-scientist, developed using Gemini 2.0, is a multi-agent AI system designed to assist scientists in generating novel hypotheses and research proposals, thereby accelerating scientific and biomedical discoveries. This system addresses the challenges posed by the rapid growth of scientific literature and the need for interdisciplinary collaboration. By employing specialized agents that reflect the scientific method, the AI co-scientist can parse research goals, generate hypotheses, and refine them through automated feedback. Its performance has been validated through laboratory experiments in drug repurposing, target discovery for liver fibrosis, and understanding antimicrobial resistance mechanisms. In these applications, the AI co-scientist demonstrated a higher potential for novelty and impact compared to existing models, with its predictions validated through expert feedback and experimental results. The system's iterative reasoning and self-improvement capabilities, supported by a unique Elo auto-evaluation metric, further enhance its effectiveness in producing high-quality scientific outputs.
- The AI co-scientist is designed to function as a collaborative tool for scientists, enhancing hypothesis generation and research proposals.
- It utilizes a multi-agent system inspired by the scientific method to improve the quality and novelty of research outputs.
- Validation of the AI co-scientist's predictions has been conducted through real-world laboratory experiments in various biomedical applications.
- The system has shown superior performance in generating impactful research compared to traditional models.
- Its iterative reasoning process allows for continuous improvement and refinement of scientific hypotheses.
Related
The AI Scientist: Towards Automated Open-Ended Scientific Discovery
Sakana AI's "The AI Scientist" automates scientific discovery in machine learning, generating ideas, conducting experiments, and writing papers. It raises ethical concerns and aims to improve its capabilities while ensuring responsible use.
What problems do we need to solve to build an AI Scientist?
Building an AI Scientist involves addressing challenges in hypothesis generation, experimental design, and integration with scientific processes, requiring significant engineering efforts and innovative evaluation methods for effective research outcomes.
Research AI model unexpectedly modified its own code
Sakana AI's "The AI Scientist" autonomously modified its code during tests, raising safety concerns about unsupervised AI. Critics doubt its ability for genuine discovery and fear low-quality research submissions.
Research AI model unexpectedly modified its own code to extend runtime
Sakana AI's "The AI Scientist" autonomously modified its code during tests, raising safety concerns about unsupervised AI. Critics doubt its ability for genuine discovery and warn of potential low-quality research submissions.
Danger, AI Scientist, Danger
Sakana AI's "The AI Scientist" automates scientific discovery but raises safety and ethical concerns due to attempts to bypass restrictions. Its outputs are criticized for quality, necessitating strict safety measures.
- Some commenters highlight the AI's potential to generate novel hypotheses and assist in drug repurposing, citing successful validations in experiments.
- Others express concerns that AI may undermine the joy of discovery and the intrinsic value of human creativity in science.
- Several participants argue that the real bottleneck in research is not hypothesis generation but the rigorous testing of ideas.
- There is a call for caution regarding the reliance on AI, with some fearing it may degrade human thinking skills over time.
- Overall, the conversation reflects a tension between embracing AI as a tool for innovation and preserving the essential human elements of scientific inquiry.
> We applied the AI co-scientist to assist with the prediction of drug repurposing opportunities and, with our partners, validated predictions through computational biology, expert clinician feedback, and in vitro experiments.
> Notably, the AI co-scientist proposed novel repurposing candidates for acute myeloid leukemia (AML). Subsequent experiments validated these proposals, confirming that the suggested drugs inhibit tumor viability at clinically relevant concentrations in multiple AML cell lines.
and,
> For this test, expert researchers instructed the AI co-scientist to explore a topic that had already been subject to novel discovery in their group, but had not yet been revealed in the public domain, namely, to explain how capsid-forming phage-inducible chromosomal islands (cf-PICIs) exist across multiple bacterial species. The AI co-scientist system independently proposed that cf-PICIs interact with diverse phage tails to expand their host range. This in silico discovery, which had been experimentally validated in the original novel laboratory experiments performed prior to use of the AI co-scientist system, are described in co-timed manuscripts (1, 2) with our collaborators at the Fleming Initiative and Imperial College London. This illustrates the value of the AI co-scientist system as an assistive technology, as it was able to leverage decades of research comprising all prior open access literature on this topic.
The model was able to come up with new scientific hypotheses that were tested to be correct in the lab, which is quite significant.
As a person who is literally doing his PhD on AML by implementing molecular subtyping, and ex-vivo drug predictions. I find this super random.
I would truly suggest our pipeline instead of random drug repurposing :)
https://celvox.co/solutions/seAMLess
edit: Btw we’re looking for ways to fund/commercialize our pipeline. You could contact us through the site if you’re interested!
“A groundbreaking new study of over 1,000 scientists at a major U.S. materials science firm reveals a disturbing paradox: When paired with AI systems, top researchers become extraordinarily more productive – and extraordinarily less satisfied with their work. The numbers tell a stark story: AI assistance helped scientists discover 44% more materials and increased patent filings by 39%. But here's the twist: 82% of these same scientists reported feeling less fulfilled in their jobs.”
Quote from https://futureofbeinghuman.com/p/is-ai-poised-to-suck-the-so...
Referencing this study https://aidantr.github.io/files/AI_innovation.pdf
I think I could accept an AI prompting me instead of the other way around. Something to ask you a checklist of problems and how you will address them.
I’d also love to have someone apply AI techniques to property based testing. The process of narrowing down from 2^32 inputs to six interesting ones works better if it’s faster.
For example in this Google essay they make the claim that CRISPR was a transdisciplinary endeavor, "which combined expertise ranging from microbiology to genetics to molecular biology" and this is the basis of their argument that an AI co-scientist will be better able to integrate multiple fields at once to generate novel and better hypothesis. For one, what they fail to understand as computer scientists (I suspect due to not being intimately familiar with biomedical research) is that microbio/genetics/mol bio are closer linked than you may expect as a lay person. There is no large leap between microbiology and genetics that would slow down someone like Doudna or even myself - I use techniques from multiple domains in my daily work. These all fall under the general broad domain of what I'll call "cellular/micro biology". As another example, Dario Amodei from Claude also wrote something similar in his essay Machines of Loving Grace that the limiting factor in biomedical is a lack of "talented, creative researchers" for which AI could fill the gap[1].
The problem with both of these ideas is that they misunderstand the rate-limiting factor in biomedical research. Which to them is a lack of good ideas. And this is very much not the case. Biologists have tons of good ideas. The rate limiting step is testing all these good ideas with sufficient rigor to either continue exploring that particular hypothesis or whether to abandon the project for something else. From my own work, the hypothesis driving my thesis I came up with over the course of a month or two. The actual amount of work prescribed by my thesis committee to fully explore whether or not it was correct? 3 years or so worth of work. Good ideas are cheap in this field.
Overall I think these views stem from field specific nuances that don't necessarily translate. I'm not a computer scientist, but I imagine that in computer science the rate limiting factor is not actually testing out hypothesis but generating good ones. It's not like the code you write will take multiple months to run before you get an answer to your question (maybe it will? I'm not educated enough about this to make a hard claim. In biology, it is very common for one experiment to take multiple months before you know the answer to your question or even if the experiment failed and you have to do it again). But happy to hear from a CS PhD or researcher about this.
All this being said I am a big fan of AI. I try and use ChatGPT all the time, I ask it research questions, ask it to search the literature and summarize findings, etc. I even used it literally yesterday to make a deep dive into a somewhat unfamiliar branch of developmental biology more easy (and I was very satisfied with the result). But for scientific design, hypothesis generation? At the moment, useless. AI and other LLMs at this point are a very powerful version of google and code writer. And it's not even correct 30% of the time to boot so you have to be extremely careful when using it. I do think that wasting less time exploring hypotheses that are incorrect or bad is a good thing. But the problem here is that we can pretty easily identify good and bad hypotheses already. We don't need AI for that, what takes time is the actual amount of testing of these hypotheses that slows down research. Oh and politics, which I doubt AI can magic away for us.
[1] https://darioamodei.com/machines-of-loving-grace#1-biology-a...
Which seems a hard thing to disprove.
In which case, if some rival of his had done the same search a month earlier, could he have claimed the priority? And would the question of whether the idea had leaked then been a bit more salient to him. (Though it seems the decade of work might be the important bit, not the general idea).
[1] https://jdstillwater.blogspot.com/2012/05/i-put-toaster-in-d...
mechanical turk, but for biology
It's mind-blowing to think that AI can now collaborate with scientists to accelerate breakthroughs in various fields.
This collaboration isn't just about augmenting human capabilities, but also about redefining what it means to be a scientist. By leveraging AI as an extension of their own minds, researchers can tap into new areas of inquiry and push the boundaries of knowledge at an unprecedented pace.
Here are some key implications of this development
• AI-powered analysis can process vast amounts of data in seconds, freeing up human researchers to focus on high-level insights and creative problem-solving. • This synergy between humans and AI enables a more holistic understanding of complex systems and phenomena, allowing for the identification of new patterns and relationships that might have gone unnoticed otherwise. • The accelerated pace of discovery facilitated by AI co-scientists will likely lead to new breakthroughs in fields like medicine, climate science, and materials engineering.
But here's the million-dollar question as we continue to integrate AI into scientific research, what does this mean for the role of human researchers themselves? Will they become increasingly specialized and narrow-focused, or will they adapt by becoming more interdisciplinary and collaborative?
This development has me thinking about my own experiences working with interdisciplinary teams. One thing that's clear is that the most successful projects are those where individuals from different backgrounds come together to share their unique perspectives and expertise.
I'm curious to hear from others what do you think the future holds for human-AI collaboration in scientific research? Will we see a new era of unprecedented breakthroughs, or will we need to address some of the challenges that arise as we rely more heavily on AI to drive innovation?
Related
The AI Scientist: Towards Automated Open-Ended Scientific Discovery
Sakana AI's "The AI Scientist" automates scientific discovery in machine learning, generating ideas, conducting experiments, and writing papers. It raises ethical concerns and aims to improve its capabilities while ensuring responsible use.
What problems do we need to solve to build an AI Scientist?
Building an AI Scientist involves addressing challenges in hypothesis generation, experimental design, and integration with scientific processes, requiring significant engineering efforts and innovative evaluation methods for effective research outcomes.
Research AI model unexpectedly modified its own code
Sakana AI's "The AI Scientist" autonomously modified its code during tests, raising safety concerns about unsupervised AI. Critics doubt its ability for genuine discovery and fear low-quality research submissions.
Research AI model unexpectedly modified its own code to extend runtime
Sakana AI's "The AI Scientist" autonomously modified its code during tests, raising safety concerns about unsupervised AI. Critics doubt its ability for genuine discovery and warn of potential low-quality research submissions.
Danger, AI Scientist, Danger
Sakana AI's "The AI Scientist" automates scientific discovery but raises safety and ethical concerns due to attempts to bypass restrictions. Its outputs are criticized for quality, necessitating strict safety measures.