Wednesday, February 26, 2025

How Google’s New AI Co-Scientist Tool Gives Us a Taste of Tomorrow’s Workplace

Google, like many other big tech names, has already released numerous AI tools: some are generic, designed to help with a wide range of tasks, while others are tailored to the specific needs of specialized users. Its latest effort is definitely in the latter category. The new “AI co-scientist” system, built on its Gemini 2.0 AI model, is specifically designed to “aid scientists in creating novel hypotheses and research plans.” That sounds like a very niche market, and it is. But it’s also likely to be the tip of the AI-as-coworker iceberg.

In a post announcing the tool, Google explained how the new system would be used in a research setting. Essentially, a scientist with a specific topic to investigate, such as discovering a new drug to tackle a particular disease, would describe it to the tool in natural language. The AI would then reply, much like any other chatbot, with a useful output: in this case a hypothesis the scientist can then test to validate or invalidate. The tool also does some of the legwork that goes into starting a new experiment, summarizing the published literature on the topic and suggesting an experimental approach.

Google’s blog post explains that the tool is actually a “multi-agent” system, tapping into what many think may be the next big thing in AI. Using Gemini’s ability to reason, synthesize data and perform long-term planning, the tool roughly models the intellectual process scientists use when tackling a novel problem: the scientific method. In this case Google’s system uses four AI agents called Generation, Reflection, Ranking and Evolution, refining its answers over and over in what Google calls a “self-improving cycle of increasingly high-quality and novel outputs.” Essentially, the tool cycles through lots of different ideas, checks how good they are, and then returns what it thinks is the best one (a rough, purely illustrative sketch of that kind of loop appears below).

Google is careful to note that the tool is designed to be a scientific collaborator, to “help experts gather research and refine their work,” and is not meant to “automate the scientific process.” In other words, the AI co-scientist isn’t designed to replace scientists. Instead, it may inspire researchers with novel ideas or otherwise speed up the process of investigating a thorny physics problem or tackling a biological issue like antimicrobial resistance.

The pros and cons of AI-assisted science

In a previous career I worked with plenty of high-tech scientific machinery, from particle accelerators to complex computer-controlled lab equipment, using everything at my disposal to help advance my research. From that experience I can say Google’s AI tool would have been invaluable, saving me hours spent looking up material online and in physical texts, and helping me think up clever ways to “break in” to a particular physics problem: the hypothesis formation and testing process at the core of scientific progress. It also seems likely other scientists will race to adopt a tool like this, because it would free up valuable time for actual real-world experiments.

For now the AI science tool is only available through a Google-led pilot testing program, to help “evaluate its strengths and limitations in science and biomedicine more broadly,” before a wider launch. I can foresee a couple of issues that may hold some researchers back, however.
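Before getting to those, here is a rough, purely illustrative sketch of what a Generation, Reflection, Ranking and Evolution cycle could look like in code. The four agent names come from Google’s post; everything else, including the ask_llm helper, the Hypothesis class, the scoring scheme and the loop structure, is an assumption made for illustration and is not Google’s actual implementation.

```python
# Illustrative sketch of a Generation -> Reflection -> Ranking -> Evolution loop,
# inspired by Google's description of its AI co-scientist. Only the agent names
# come from Google's post; the helpers and loop structure below are assumptions.

from dataclasses import dataclass


def ask_llm(prompt: str) -> str:
    """Stand-in for a call to a large language model (e.g. a Gemini API call)."""
    raise NotImplementedError("Wire this up to a real model API.")


@dataclass
class Hypothesis:
    text: str
    critique: str = ""
    score: float = 0.0


def generation(goal: str, n: int = 4) -> list[Hypothesis]:
    """Generation agent: propose several candidate hypotheses for the research goal."""
    return [Hypothesis(ask_llm(f"Propose hypothesis #{i + 1} for: {goal}")) for i in range(n)]


def reflection(h: Hypothesis) -> Hypothesis:
    """Reflection agent: critique a hypothesis for novelty and plausibility."""
    h.critique = ask_llm(f"Critique this hypothesis against the published literature: {h.text}")
    return h


def ranking(candidates: list[Hypothesis]) -> list[Hypothesis]:
    """Ranking agent: score the critiqued hypotheses and sort them best-first."""
    for h in candidates:
        h.score = float(ask_llm(f"Score 0-10: {h.text}\nCritique: {h.critique}"))
    return sorted(candidates, key=lambda h: h.score, reverse=True)


def evolution(best: Hypothesis, goal: str) -> Hypothesis:
    """Evolution agent: refine the strongest hypothesis using its critique."""
    return Hypothesis(ask_llm(
        f"Refine this hypothesis for '{goal}' using the critique.\n"
        f"Hypothesis: {best.text}\nCritique: {best.critique}"
    ))


def co_scientist_loop(goal: str, rounds: int = 3) -> Hypothesis:
    """Self-improving cycle: generate, reflect, rank, evolve, repeat."""
    candidates = generation(goal)
    best = candidates[0]
    for _ in range(rounds):
        candidates = ranking([reflection(h) for h in candidates])
        best = candidates[0]
        # Seed the next round with an evolved version of the current best idea.
        candidates = [evolution(best, goal)] + candidates[:-1]
    return best
```

With that picture in mind, back to the issues that might give some researchers pause.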
More traditional scientists may be reluctant to trust such a revolutionary new model for carrying out research, even if that means watching their rivals embrace the technology first. Scientists whose research is politically sensitive, or perhaps secret, may not trust an AI at all, simply because of the known issue of AI data “leakage,” where information entered as queries can surface later in response to a different user’s prompt. Some researchers may thus be forbidden from using this sort of AI entirely. Scientific research is also a highly creative process, requiring feats of imagination and insight to produce original work, and some researchers may be reluctant to hand that part of the process over to a machine.

Nevertheless, if Google’s tool really works (and the blog post includes several examples where it’s been tested out in the real world, including helping researchers looking at liver fibrosis), scientists of all stripes may embrace having an AI coworker in the lab. In this way, the tool gives us a hint of how AI may penetrate many different types of workplace over time, as more and more specialized AI systems are developed by Google and rival firms. The idea even taps into the thorny “will AI steal my job?” question, and this particular example suggests that no, it won’t: instead, AI will ride along with you as you work through your day, helping out when you need it.

BY KIT EATON @KITEATON
