🧠 AI Pulse 10 min read

AlphaFold Changed Science. After 5 Years, It’s Still Evolving

By Sandro Iannaccone

Wednesday, December 24, 2025

WIRED spoke with DeepMind’s Pushmeet Kohli about the recent past—and promising future—of the Nobel Prize-winning research project that changed biology and chemistry forever.

Amino acids “folded” to form a protein. Photograph: Christoph Burgstedt/Getty Images

AlphaFold, the artificial intelligence system developed by Google DeepMind, has just turned five. Over the past few years, we’ve periodically reported on its successes; last year, it won the Nobel Prize in Chemistry. Until AlphaFold’s debut in November 2020, DeepMind had been best known for teaching an artificial intelligence to beat human champions at the ancient game of Go. Then it started playing something more serious, aiming its deep learning algorithms at one of the most difficult problems in modern science: protein folding. The result was AlphaFold 2, a system capable of predicting the three-dimensional shape of proteins with atomic accuracy. Its work culminated in the compilation of a database that now contains over 200 million predicted structures, essentially the entire known protein universe, and is used by nearly 3.5 million researchers in 190 countries around the world. The Nature article published in 2021 describing the algorithm has been cited 40,000 times to date.

Last year, AlphaFold 3 arrived, extending the capabilities of artificial intelligence to DNA, RNA, and drugs. That transition is not without challenges—such as “structural hallucinations” in the disordered regions of proteins—but it marks a step toward the future. To understand what the next five years hold for AlphaFold, WIRED spoke with Pushmeet Kohli, vice president of research at DeepMind and architect of its AI for Science division.

WIRED: Dr. Kohli, the arrival of AlphaFold 2 five years ago has been called “the iPhone moment” for biology. Tell us about the transition from challenges like the game of Go to a fundamental scientific problem like protein folding. What was your role in that transition?

Pushmeet Kohli: Science has been central to our mission from day one. Demis Hassabis founded Google DeepMind on the idea that AI could be the best tool ever invented for accelerating scientific discovery. Games were always a testing ground, and a way to develop techniques we knew would eventually tackle real-world problems. My role has really been about identifying and pursuing scientific problems where AI can make a transformative impact, outlining the key ingredients required to unlock progress, and bringing together a multidisciplinary team to work on these grand challenges.

What AlphaGo proved was that neural networks combined with planning and search could master incredibly complex systems. Protein folding had those same characteristics. The crucial difference was that solving it would unlock discoveries across biology and medicine that could genuinely improve people’s lives. We focus on what I call “root node problems,” areas where the scientific community agrees solutions would be transformative, but where conventional approaches won’t get us there in the next five to 10 years. Think of it like a tree of knowledge—if you solve these root problems, you unlock entire new branches of research. Protein folding was definitely one of those.

Looking ahead, I see three key areas of opportunity: building more powerful models that can truly reason and collaborate with scientists like a research partner, getting these tools into the hands of every scientist on the planet, and tackling even bolder ambitions, like creating the first accurate simulation of a complete human cell.

WIRED: Let’s talk about hallucinations. You have repeatedly emphasized the importance of a “harness” architecture, pairing a creative generative model with a rigorous verifier. How has this philosophy evolved from AlphaFold 2 to AlphaFold 3, specifically now that you are using diffusion models, which are inherently more “imaginative” and prone to hallucination?

Kohli: The core philosophy hasn’t changed—we still pair creative generation with rigorous verification. What’s evolved is how we apply that principle to more ambitious problems. We’ve always been problem-first in our approach. We don’t look for places to slot in existing techniques; we understand the problem deeply, then build whatever’s needed to solve it. The shift to diffusion models in AlphaFold 3 came from what the science demanded: We needed to predict how proteins, DNA, RNA, and small molecules all interact together, not just individual protein structures.

You’re right to raise the hallucination concern with diffusion models being more generative. This is where verification becomes even more critical. We’ve built in confidence scores that signal when predictions might be less reliable, which is particularly important for intrinsically disordered proteins. But what really validates the approach is that over five years, scientists have tested AlphaFold predictions in their labs again and again. They trust it because it works in practice.
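Those confidence scores are visible to anyone who downloads a model from the public AlphaFold Database: each prediction carries a per-residue score called pLDDT, stored by convention in the B-factor column of the PDB file, and values below roughly 50 often mark the kind of disordered regions Kohli mentions. The sketch below is illustrative only; the file URL pattern, the “model_v4” version suffix, and the example UniProt accession are assumptions based on the database’s current public conventions and may change between releases.

```python
import urllib.request

# Illustrative sketch: download an AlphaFold DB model and flag low-confidence residues.
# Assumption: the URL pattern and "model_v4" suffix follow the current AlphaFold
# Database conventions; both may change in future releases.
UNIPROT_ID = "P04637"  # human p53, used here only as an example accession
URL = f"https://alphafold.ebi.ac.uk/files/AF-{UNIPROT_ID}-F1-model_v4.pdb"

LOW_CONFIDENCE = 50.0  # pLDDT below ~50 is commonly treated as very low confidence

with urllib.request.urlopen(URL) as response:
    pdb_text = response.read().decode("utf-8")

# In AlphaFold DB models, the per-residue pLDDT score sits in the B-factor
# column of each ATOM record (columns 61-66); the residue number is in 23-26.
plddt_by_residue = {}
for line in pdb_text.splitlines():
    if line.startswith("ATOM"):
        residue_number = int(line[22:26])
        plddt = float(line[60:66])
        plddt_by_residue[residue_number] = plddt  # same value for every atom of a residue

low_confidence = [r for r, score in plddt_by_residue.items() if score < LOW_CONFIDENCE]
print(f"{len(plddt_by_residue)} residues parsed; "
      f"{len(low_confidence)} below pLDDT {LOW_CONFIDENCE} (possible disordered regions)")
```

A long stretch of low pLDDT is not necessarily wrong, but it is the model’s own signal to treat those coordinates as a guess rather than a measurement.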
WIRED: You are launching the “AI co-scientist,” an agentic system built on Gemini 2.0 that generates and debates hypotheses. This sounds like the scientific method in a box. Are we moving toward a future where the “Principal Investigator” of a lab is an AI, and humans are merely the technicians verifying its experiments?

Kohli: What I see happening is a shift in how scientists spend their time. Scientists have always played dual roles—thinking about what problem needs solving, and then figuring out how to solve it. With AI helping more on the “how” part, scientists will have more freedom to focus on the “what,” or which questions are actually worth asking. AI can accelerate finding solutions, sometimes quite autonomously, but determining which problems deserve attention remains fundamentally human.

Co-scientist is designed with this partnership in mind. It’s a multi-agent system built with Gemini 2.0 that acts as a virtual collaborator: identifying research gaps, generating hypotheses, and suggesting experimental approaches. Recently, Imperial College researchers used it while studying how certain viruses hijack bacteria, which opened up new directions for tackling antimicrobial resistance. But the human scientists designed the validation experiments and grasped the significance for global health. The critical thing is understanding these tools properly, both their strengths and their limitations. That understanding is what enables scientists to use them responsibly and effectively.
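The multi-agent loop Kohli describes can be sketched in outline. The snippet below is a simplified illustration of the propose, critique, and revise pattern, not Google’s implementation; call_model is a hypothetical stand-in for whichever LLM client you use, and the prompts, agent roles, and example question are invented for the illustration.

```python
from typing import Callable, List

def debate_hypotheses(
    question: str,
    call_model: Callable[[str], str],
    n_proposers: int = 3,
    n_rounds: int = 2,
) -> str:
    """Have several 'proposer' agents draft hypotheses, let a 'critic' agent
    attack each one, then ask a 'referee' agent to rank the survivors."""
    hypotheses: List[str] = [
        call_model(f"Propose one testable hypothesis for: {question}")
        for _ in range(n_proposers)
    ]

    for _ in range(n_rounds):
        critiques = [
            call_model(f"Critique this hypothesis, citing weaknesses and missing evidence:\n{h}")
            for h in hypotheses
        ]
        # Each proposer revises its hypothesis in light of the critique it received.
        hypotheses = [
            call_model(
                f"Revise the hypothesis below to address the critique.\n"
                f"Hypothesis: {h}\nCritique: {c}"
            )
            for h, c in zip(hypotheses, critiques)
        ]

    return call_model(
        "Rank these competing hypotheses and justify which is most worth testing first:\n\n"
        + "\n\n".join(hypotheses)
    )

if __name__ == "__main__":
    # Trivial echo "model" so the sketch runs without any API; swap in a real client.
    report = debate_hypotheses(
        "Why do some pirate phages break into bacteria more efficiently than others?",
        call_model=lambda prompt: f"[model output for: {prompt[:60]}...]",
    )
    print(report)
```

In practice the interesting work sits in the verification harness around such a loop: grounding claims in the literature, scoring hypotheses, and, as Kohli stresses, leaving the validating experiments to human researchers.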
WIRED: Can you share a concrete example—perhaps from your work on drug repurposing or bacterial evolution—where the AI agents disagreed, and that disagreement led to a better scientific outcome than a human working alone?

Kohli: The way the system works is quite interesting. We have multiple Gemini models acting as different agents that generate ideas, then debate and critique each other’s hypotheses. The idea is that this internal back-and-forth, exploring different interpretations of the evidence, leads to more refined and creative research proposals.

For example, researchers at Imperial College were investigating how certain “pirate phages”—these fascinating viruses that hijack other viruses—manage to break into bacteria. Understanding these mechanisms could open up entirely new ways of tackling drug-resistant infections, which is obviously a huge global health challenge. What Co-scientist brought to this work was the ability to rapidly analyze decades of published research and independently arrive at a hypothesis about bacterial gene transfer mechanisms that matched what the Imperial team had spent years developing and validating experimentally. What we’re really seeing is that the system can dramatically compress the hypothesis-generation phase—synthesizing vast amounts of literature quickly—whilst human researchers still design the experiments and understand what the findings actually mean for patients.

WIRED: Looking ahead to the next five years, besides proteins and materials, what is the “unsolved problem” that keeps you up at night that these tools can help with?

Kohli: What genuinely excites me is understanding how cells function as complete systems—and deciphering the genome is fundamental to that. DNA is the recipe book of life, proteins are the ingredients. If we can truly understand what makes us different genetically and what happens when DNA changes, we unlock extraordinary new possibilities. Not just personalized medicine, but potentially designing new enzymes to tackle climate change and other applications that extend well beyond health care.

That said, simulating an entire cell is one of biology’s major goals, but it’s still some way off. As a first step, we need to understand the cell’s innermost structure, its nucleus: precisely when each part of the genetic code is read, and how the signaling molecules that ultimately lead to proteins being assembled are produced. Once we’ve explored the nucleus, we can work our way from the inside out. We’re working toward that, but it will take several more years.

If we could reliably simulate cells, we could transform medicine and biology. We could test drug candidates computationally before synthesis, understand disease mechanisms at a fundamental level, and design personalized treatments. That’s really the bridge between biological simulation and clinical reality you’re asking about—moving from computational predictions to actual therapies that help patients.

This story originally appeared in WIRED Italia and has been translated from Italian.


Source

This article was originally published by Sandro Iannaccone. Read the original at wired.com

Emergent News aggregates and curates content from trusted sources to help you understand reality clearly.

Related Articles

AI-Powered Dating Is All Hype. IRL Cruising Is the Future
ai-pulse 6 min

Dating apps and AI companies have been touting bot wingmen for months. But the future might just be good old-fashioned meet-cutes.

Dec 31, 2025 Read →
The Great Big Power Play
ai-pulse 7 min

US support for nuclear energy is soaring. Meanwhile, coal plants are on their way out and electricity-sucking data centers are meeting huge pushback. Welcome to the next front in the energy battle.

Dec 30, 2025 Read →
3 New Tricks to Try With Google Gemini Live After Its Latest Major Upgrade
ai-pulse 5 min

Google's AI is now even smarter and more versatile.

Dec 29, 2025 Read →