🧠 AI Pulse 14 min read

How AI broke the smart home in 2025

By Jennifer Pattison Tuohy

Tuesday, December 23, 2025

AI illustration

The arrival of generative AI assistants in our smart homes held such promise; instead, they struggle to turn on the lights.

This morning, I asked my Alexa-enabled Bosch coffee machine to make me a coffee. Instead of running my routine, it told me it couldn’t do that. Ever since I upgraded to Alexa Plus, Amazon’s generative-AI-powered voice assistant, it has failed to reliably run my coffee routine, coming up with a different excuse almost every time I ask. It’s 2025, and AI still can’t reliably control my smart home. I’m beginning to wonder if it ever will.

The potential for generative AI and large language models to take the complexity out of the smart home, making it easier to set up, use, and manage connected devices, is compelling. So is the promise of a “new intelligence layer” that could unlock a proactive, ambient home. But this year has shown me that we are a long way from any of that. Instead, our reliable but limited voice assistants have been replaced with “smarter” versions that, while better conversationalists, can’t consistently do basic tasks like operating appliances and turning on the lights. I want to know why.

This wasn’t the future we were promised

It was back in 2023, during an interview with Dave Limp, that I first became intrigued by the possibilities of generative AI and large language models for improving the smart home experience. Limp, then the head of Amazon’s Devices & Services division that oversees Alexa, was describing the capabilities of the new Alexa they were soon to launch (spoiler alert: it wasn’t soon). Along with a more conversational assistant that could actually understand what you said no matter how you said it, what stood out to me was the promise that this new Alexa could use its knowledge of the devices in your smart home, combined with the hundreds of APIs they plugged into it, to give the assistant the context it needed to make your smart home easier to use.

From setting up devices to controlling them, unlocking all their features, and managing how they can interact with other devices, a smarter smart home assistant seemed to hold the potential to not only make it easier for enthusiasts to manage their gadgets but also make it easier for everyone to enjoy the benefits of the smart home.

AI can’t even turn on the lights

Fast-forward three years, and the most useful smart home AI upgrade we have is AI-powered descriptions for security camera notifications. It’s handy, but it’s hardly the sea change I had hoped for.

It’s not that these new smart home assistants are a complete failure. There’s a lot I like about Alexa Plus; I even named it as my smart home software pick of the year. It is more conversational, understands natural language, and can answer many more random questions than the old Alexa. While it sometimes struggles with basic commands, it can understand complex ones; saying “I want it dimmer in here and warmer” will adjust the lights and crank up the thermostat. It’s better at managing my calendar, helping me cook, and other home-focused features. Setting up routines with voice is a huge improvement over wrestling with the Alexa app — even if running them isn’t as reliable.

Google has promised similar capabilities with its Gemini for Home upgrade to its smart speakers, although that’s rolling out at a glacial pace, and I haven’t been able to try it beyond some on-the-rails demos. I was able to test Gemini for Home’s feature that attempts to summarize what’s happened at my home using AI-generated text descriptions from Nest camera footage. It was wildly inaccurate. As for Apple’s Siri, it’s still firmly stuck in the last decade of voice assistants, and it appears it will stay there for a while longer.

The problem is that the new assistants aren’t as consistent at controlling smart home devices as the old ones. While they were often frustrating to use, the old Alexa and Google Assistant (and the current Siri) would generally always turn on the lights when you asked them to, provided you used precise nomenclature. Today, their “upgraded” counterparts struggle with consistency in basic functions like turning on the lights, setting timers, reporting on the weather, playing music, and running the routines and automations on which many of us have built our smart homes. I’ve noticed this in my testing, and online forums are full of users who have encountered it. Amazon and Google have acknowledged the struggles they’ve had in making their revamped generative-AI-powered assistants reliably perform basic tasks. And it’s not limited to smart home assistants; ChatGPT can’t consistently tell time or count.

Why is this, and will it ever get better? To understand the problem, I spoke with two professors in the field of human-centric artificial intelligence with experience with agentic AI and smart home systems.

My takeaway from those conversations is that, while it’s possible to make these new voice assistants do almost exactly what the old ones did, it will take a lot of work, and that’s possibly work most companies just aren’t interested in doing. Considering there are limited resources in this field and ample opportunity to do something much more exciting (and more profitable) than reliably turn on the lights, that’s the way they’re moving, according to experts I spoke with. Given all these factors, it seems the easiest way to improve the technology is to just deploy it in the real world and let it improve over time. Which is likely why Alexa Plus and Gemini for Home are in “early access” phases. Basically, we’re all beta testers for the AI. The bad news is it could be a while until it gets better.

In his research, Dhruv Jain, assistant professor of Computer Science & Engineering at the University of Michigan and director of the Soundability Lab, has also found that newer models of smart home assistants are less reliable. “It’s more conversational, people like it, people like to talk to it, but it’s not as good as the previous one,” he says. “I think [tech companies’] model has always been to release it fairly fast, collect data, and improve on it. So, over a few years, we might get a better model, but at the cost of those few years of people wrestling with it.”

The inherent problem appears to be that the old and new technologies don’t mesh. So, to build their new voice assistants, Amazon, Google, and Apple have had to throw out the old and build something entirely new. However, they quickly discovered that these new LLMs were not designed for the predictability and repetitiveness that their predecessors excelled at.

“It was not as trivial an upgrade as everyone originally thought,” says Mark Riedl, a professor at the School of Interactive Computing at Georgia Tech. “LLMs understand a lot more and are open to more arbitrary ways to communicate, which then opens them to interpretation and interpretation mistakes.”

Basically, LLMs just aren’t designed to do what prior command-and-control-style voice assistants did. “Those voice assistants are what we call ‘template matchers,’” explains Riedl. “They look for a keyword, when they see it, they know that there are one to three additional words to expect.” For example, you say “Play radio,” and they know to expect a station call code next.

LLMs, on the other hand, “bring in a lot of stochasticity — randomness,” explains Riedl. Asking ChatGPT the same prompt multiple times may produce multiple responses. This is part of their value, but it’s also why when you ask your LLM-powered voice assistant to do the same thing you asked it yesterday, it might not respond the same way. “This randomness can lead to misunderstanding basic commands because sometimes they try to overthink things too much,” he says.

To fix this, companies like Amazon and Google have developed ways to integrate LLMs with the APIs at the heart of our smart homes (and most of everything we do on the web). But this has potentially created a new problem. “The LLMs now have to compose a function call to an API, and it has to work a whole lot harder to correctly create the syntax to get the call exactly right,” Riedl posits. Where the old systems just waited for the keyword, LLM-powered assistants now have to lay out an entire code sequence that the API can recognize. “It has to keep all that in memory, and it’s another place where it can make mistakes.”

All of this is a scientific way of explaining why my coffee machine sometimes won’t make me a cup of coffee, or why you might run into trouble getting Alexa or Google’s assistant to do something it used to do just fine.
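Riedl’s contrast between template matching and LLM-style function calling is easier to see in code. Below is a minimal, hypothetical sketch of the two approaches; the device names, the toy smart home API, and the randomness standing in for an LLM’s decoding are all invented for illustration, and none of it reflects how Alexa or Gemini are actually built.

```python
# Illustrative sketch only: a toy contrast between a "template matcher" voice
# command parser and an LLM-style assistant that must compose an API call.
# The device names, the fake smart home API, and the simulated "LLM" below
# are all hypothetical stand-ins, not real Amazon or Google internals.
import random

DEVICES = {"kitchen light": "light-1", "living room light": "light-2"}

def smart_home_api(action: str, device_id: str) -> str:
    """Stand-in for the API at the heart of a smart home platform."""
    if action in {"turn_on", "turn_off"} and device_id in DEVICES.values():
        return f"OK: {action} {device_id}"
    return "ERROR: malformed call"

# 1) Template matcher: rigid, but deterministic.
def template_matcher(utterance: str) -> str:
    words = utterance.lower().strip()
    if words.startswith("turn on "):
        device = words[len("turn on "):]
        if device in DEVICES:
            return smart_home_api("turn_on", DEVICES[device])
    # Anything off-template fails fast and predictably.
    return "Sorry, I didn't understand that."

# 2) LLM-style planner: flexible phrasing, but it has to *generate* the call,
#    which is one more place it can make a mistake (simulated with randomness).
def llm_planner(utterance: str, seed: int) -> str:
    rng = random.Random(seed)
    # Pretend the model has resolved "in here" to a concrete device id.
    device_id = DEVICES["kitchen light"]
    # Stochastic decoding: most samples emit the right function call, but a
    # small fraction get the action name or the argument wrong.
    call = rng.choices(
        [("turn_on", device_id), ("switch_on", device_id), ("turn_on", "light-99")],
        weights=[0.9, 0.05, 0.05],
    )[0]
    return smart_home_api(*call)

if __name__ == "__main__":
    print(template_matcher("turn on kitchen light"))     # always works
    print(template_matcher("make it brighter in here"))  # always fails, predictably
    for seed in range(5):                                # flexible, usually works
        print(llm_planner("make it brighter in here", seed))
```

The asymmetry is the point: the template matcher fails predictably on anything off-script, while the generative approach copes with looser phrasing but adds a new failure mode every time it has to write the call itself.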
So, why did these companies abandon a technology that worked for something that doesn’t? Because of its potential. A voice assistant that, rather than being limited to responding to specific inputs, can understand natural language and take action based on that understanding is infinitely more capable.

“What all the companies that make Alexa and Siri and things like that really want to do is chaining of services,” explains Riedl. “That’s where you want a general language understanding, something that can understand complex relationships through tasks and how they’re conveyed by speech. They can invent the if-else statements that chain everything together, on the fly, and dynamically generate the sequence.” They can become agentic.

This is why you throw away the old technology, says Riedl, because it had no chance of doing this. “It’s about the cost-benefit ratio,” says Jain. “[The new technology] is not ever going to be as accurate at this as the non-probabilistic technology before, but the question is whether that sufficiently high accuracy, plus the expanded range of possibilities the new technology offers, is worth more than a 100 percent accurate non-probabilistic model.”

One solution is to use multiple models to power these assistants. Google’s Gemini for Home consists of two separate systems: Gemini and Gemini Live. Anish Kattukaran, head of product at Google Home and Nest, says the aim is to eventually have the more powerful Gemini Live run everything, but today, the more tightly constrained Gemini for Home is in charge. Amazon similarly uses multiple models to balance its various capabilities.

But it’s an imperfect solution that has led to inconsistency and confusion in our smart homes. Riedl says that no one has really figured out how to train LLMs to understand when to be very precise and when to embrace randomness, meaning even the “tame” LLMs can still get things wrong. “If you wanted to have a machine that just was never random at all, you could tamp it all down,” says Riedl. But that same chatbot would not be more conversational or able to tell your kid fantastical bedtime stories — both capabilities that Alexa and Google are touting. “If you want it all in one, you’re really making some tradeoffs.”

These struggles in its deployment in the smart home could be a harbinger of broader issues for the technology. If AI can’t turn on the lights reliably, why should anyone rely on it to do more complex tasks, asks Riedl. “You have to walk before you can run.” But tech companies are known for their propensity to move fast and break things.

“The story of language models has always been about taming the LLMs,” says Riedl. “Over time, they become more tame, more reliable, more trustworthy. But we keep pushing into the fringe of those spaces where they’re not.” Riedl does believe in the path to a purely agentic assistant. “I don’t know if we ever get to AGI, but I think over time we do see these things at least being more reliable.”

The question for those of us dealing with these unreliable AIs in our homes today, however, is: are we willing to wait, and at what cost to the smart home in the meantime?

Source

This article was originally published by Jennifer Pattison Tuohy. Read the original at theverge.com

Emergent News aggregates and curates content from trusted sources to help you understand reality clearly.

Related Articles

AI-Powered Dating Is All Hype. IRL Cruising Is the Future
ai-pulse 6 min

AI-Powered Dating Is All Hype. IRL Cruising Is the Future

Dating apps and AI companies have been touting bot wingmen for months. But the future might just be good old-fashioned meet-cutes.

Dec 31, 2025 Read →
The Great Big Power Play AI illustration
ai-pulse 7 min

The Great Big Power Play

By Molly Taft

US support for nuclear energy is soaring. Meanwhile, coal plants are on their way out and electricity-sucking data centers are meeting huge pushback. Welcome to the next front in the energy battle.

Take yourself back to 2017. Get Out and The Shape of Water were playing in theaters, Zohran Mamdani was still known as rapper Young Cardamom, and the Trump administration, freshly in power, was eager to prop up its favored energy sources. That year, the administration introduced a series of subsidies for struggling coal-fired power plants and nuclear power plants, which were facing increasing price pressures from gas and cheap renewables. The plan would have put taxpayers on the hook for billions of dollars. It didn’t work.

In subsequent years, the nuclear industry kept running into roadblocks. Three nuclear plants have shut down since 2020, while construction of two of the only four reactors started since 2000 was put on hold after a decade and billions of dollars, following a political scandal. Coal, meanwhile, continued its long decline: it comprises just 17 percent of the US power mix, down from a high of 45 percent in 2010.

Now, both of these energy sources are getting second chances. The difference this time is the buzz around AI, but it isn’t clear that the outcome will be much different.

Expired: canceling nuclear. Tired: canceling coal. Wired: the free market.

Throughout 2025, the Trump administration has not just gone all in on promoting nuclear, but positioned it specifically as a solution to AI’s energy needs. In May, the president signed a series of executive orders intended to boost nuclear energy in the US, including ordering 10 new large reactors to be constructed by 2030. A pilot program at the Department of Energy created as a result of May’s executive orders—coupled with a serious reshuffling of the country’s nuclear regulator—has already led to breakthroughs from smaller startups. Energy secretary Chris Wright said in September that AI’s progress “will be accelerated by rapidly unlocking and deploying commercial nuclear power.”

The administration’s push is mirrored by investments from tech companies. Giants like Google, Amazon, and Microsoft have inked numerous deals in recent years with nuclear companies to power data centers; Microsoft even joined the World Nuclear Association. Multiple retired reactors in the US are being considered for restarts—including two of the three that have closed in the past five years—with the tech industry supporting some of these arrangements. (This includes Microsoft’s high-profile restart of the infamous Three Mile Island, which is also being backed by a $1 billion loan from the federal government.) It’s a good time for both the private and public sectors to push nuclear: public support for nuclear power is the highest it’s been since 2010.

Despite all of this, the practicalities of nuclear energy leave its future in doubt. Most of nuclear’s costs come not from onerous regulations but from construction. Critics are wary of juiced-up valuations for small modular reactor companies, especially those with deep connections to the Trump administration. An $80 billion deal the government struck with reactor giant Westinghouse in October is light on details, leaving more questions than answers for the industry. And despite high-profile tech deals that promise to get reactors up and running in a few years, the timelines remain tricky.

Still, insiders say that this year marked a turning point. “Nuclear technology has been seen by proponents as the neglected and unjustly villainized hero of the energy world,” says Brett Rampal, a nuclear power expert who advises investors. “Now, full-throated support from the president, Congress, tech companies, and the common person feels like generational restitution and a return to meritocracy.”

Nuclear isn’t the only form of energy that seems to be getting a second start thanks to AI. In April, President Trump signed a series of executive orders to boost US coal to power AI; Wright has since ordered two plants that were slated to be retired to stay online via emergency order. The administration has also scrambled to make it easier to run coal plants, in particular focusing on doing away with pollution regulation. These efforts—and the endless demand for energy from AI—may have extended a lifeline to coal: more than two dozen generating units that were scheduled to retire across the country are now staying online, separate from Wright’s order, with some getting yearslong reprieves.

A complete recovery for the industry, however, is still an open question. A recent analysis of the US power sector finds that almost all of the 10 largest utilities in the US are significantly slashing their reliance on coal. (Many of these utilities, the analysis shows, have been looking to replace coal-fired power with more nuclear.)

Part of what may keep coal on its downward track in the US—albeit with an extended lifeline—is simply its bad PR. The tech of the future, after all, isn’t supposed to pollute the air and drive temperatures up; while AI has significantly set Big Tech back from its climate-change goals, these companies are theoretically still committed to not frying the planet. And while tech giants are scrambling to align themselves with nuclear, which does not produce direct carbon emissions, no big companies have openly partnered with a struggling coal plant or splashed out a press release about how they’re seeking to produce more energy from coal. (Some retired coal plants are being proposed as sites for data centers, powered by natural gas.) Some companies are trying to develop technologies that would capture carbon emissions from coal plants, but the outlook for those technologies is bearish following some high-profile failures. “Emissions [are] always going to factor into the discussion” for investors, says Rampal.

The Oval Office playing favorites with energy sources doesn’t mean that it can defeat the market. Utility-scale solar and onshore wind remain some of the cheapest forms of energy around, even without government subsidies. And while Washington looks backward, other countries are continuing massive buildouts of renewable energy. China’s emissions have taken a nosedive over the past 18 months, thanks in large part to a huge expansion of renewable energy. Coal’s use in the power sector is declining due to competition from renewables, while nuclear made up only a small slice of total power use. If the administration’s goal is to defeat China on AI, it might want to start by taking a look at its energy playbook.

Dec 30, 2025 Read →
3 New Tricks to Try With Google Gemini Live After Its Latest Major Upgrade AI illustration
ai-pulse 5 min

3 New Tricks to Try With Google Gemini Live After Its Latest Major Upgrade

By David Nield

Google’s AI is now even smarter, and more versatile.

Gemini Live is the more conversational, natural-language way of interacting with the Google Gemini AI bot using your voice. The idea is that you chat with it like you would chat with a friend, interruptions and all, even if the actual answers are the same as you’d get from typing your queries into Gemini as normal.

Now, about a year and a half after its debut, Gemini Live has been given what Google is describing as its “biggest update ever.” The update makes the Gemini Live mode even more natural and even more conversational than before, with a better understanding of tone, nuance, pronunciation, and rhythm. There’s no real visible indication that anything has changed, and often a lot of the responses will seem the same as before too. However, there are certain areas where you can tell the difference the latest upgrade has made—so here’s how to make the most of the new and improved Gemini Live.

The update is rolling out now for Gemini on Android and iOS. To access Gemini Live, launch the Gemini app, then tap the Live button in the lower right-hand corner (it looks vaguely like a sound wave) and start talking.

Hear Some Stories

Gemini Live can now add more feeling and variation to its storytelling capabilities—which can be useful for history lessons, bedtimes for the children, and creative brainstorming. The AI will even add in different accents and tones where appropriate, to help you distinguish between the characters and scenes.

One of Google’s own examples for how this works best is to get Gemini Live to tell you the story of the Roman Empire from the perspective of Julius Caesar. It’s a challenge for Gemini that requires some leaps in perspective and imagination, and to use tone and style appropriately in a way that Gemini Live should now be better at.

You don’t have to restrict yourself to Julius Caesar or the Roman Empire either. You could get Gemini Live to give you a retelling of Pride and Prejudice from the perspective of each different Bennet sister, for example, or have the AI spin up a tale of what life would have been like in your part of the world 100, 200, or 300 years ago.

Learn Some Skills

Another area where Gemini Live’s new capabilities make a noticeable difference is in educating and explaining: you can get it to give you a crash course (or a longer tutorial) on any topic of your choosing, anything from the intricacies of human genetics to the best ways to clean a carpet. You can even get Gemini Live to teach you a language.

The AI can now go at a pace to suit you, which is particularly useful when you’re trying to learn something new. If you need Gemini Live to slow down, speed up, or repeat something, then just say so. If you’ve only got a certain amount of time to spare, let Gemini know when you’re chatting to it.

As usual, be wary of AI hallucinations, and maybe don’t trust that everything you hear is fully accurate or verified. If you want to learn something like how to rewire the lighting in your home or fix a problematic car engine, double-check the guidance you’re getting with other sources, but Gemini Live is at least a useful starting point.

Test Some Accents

One of the new skills Gemini Live gains with this latest update is the ability to speak in different accents. Perhaps you want the history of the Wild West spoken by a cowboy, or you need the intricacies of the British Royal Family explained by someone with an authentic London accent. Gemini Live can now handle these requests.

This extends to the language learning mentioned above, because you can hear words and phrases spoken as they would be by native speakers—and then try to copy the pronunciation and phrasing. While Gemini Live doesn’t cover every language and accent across the globe, it can access plenty of them.

There are certain safeguards built into Gemini Live here, and your requests might get refused if you veer too close to derogatory uses of accents and speech, or if you’re trying to impersonate real people. However, it’s another fun way to test out the AI, and to get responses that are more varied and personalized.

Dec 29, 2025 Read →