AI Pulse
Artificial Intelligence, Machine Learning, Deep Learning breakthroughs
AI-Powered Dating Is All Hype. IRL Cruising Is the Future
Dating apps and AI companies have been touting bot wingmen for months. But the future might just be good old-fashioned meet-cutes.
The Great Big Power Play
Molly Taft Science Dec 30, 2025 6:00 AM The Great Big Power Play US support for nuclear energy is soaring. Meanwhile, coal plants are on their way out and electricity-sucking data centers are meeting huge pushback. Welcome to the next front in the energy battle. PHOTO-ILLUSTRATION: WIRED STAFF; GETTY IMAGES Save Story Save this story Save Story Save this story Take yourself back to 2017. Get Out and The Shape of Water were playing in theaters, Zohran Mamdani was still known as rapper Young Cardamom, and the Trump administration, freshly in power, was eager to prop up its favored energy sources. That year, the administration introduced a series of subsidies for struggling coal-fired power plants and nuclear power plants, which were facing increasing price pressures from gas and cheap renewables. The plan would have put taxpayers on the hook for billions of dollars. It didnât work. In subsequent years, the nuclear industry kept running into roadblocks. Three nuclear plants have shut down since 2020, while construction of two of the only four reactors started since 2000 was put on hold after a decade and billions of dollars following a political scandal . Coal, meanwhile, continued its long decline: It comprises just 17 percent of the US power mix , down from a high of 45 percent in 2010. Now, both of these energy sources are getting second chances. The difference this time is the buzz around AI , but it isnât clear that the outcome will be much different. expired: canceling nuclear tired: canceling coal wired: the free market Read more Expired/Tired/WIRED 2025 stories here . Throughout 2025, the Trump administration has not just gone all in on promoting nuclear, but positioned it specifically as a solution to AIâs energy needs. In May, the president signed a series of executive orders intended to boost nuclear energy in the US, including ordering 10 new large reactors to be constructed by 2030. A pilot program at the Department of Energy created as a result of Mayâs executive ordersâcoupled with a serious reshuffling of the countryâs nuclear regulatorâhas already led to breakthroughs from smaller startups. Energy secretary Chris Wright said in September that AIâs progress âwill be accelerated by rapidly unlocking and deploying commercial nuclear power.â The administrationâs push is mirrored by investments from tech companies. Giants like Google, Amazon, and Microsoft have inked numerous deals in recent years with nuclear companies to power data centers; Microsoft even joined the World Nuclear Association. Multiple retired reactors in the US are being considered for restartsâincluding two of the three that have closed in the past five yearsâwith the tech industry supporting some of these arrangements . (This includes Microsoftâs high-profile restart of the infamous Three Mile Island, which is also being backed by a $1 billion loan from the federal government.) Itâs a good time for both the private and public sectors to push nuclear: public support for nuclear power is the highest itâs been since 2010. Despite all of this, the practicalities of nuclear energy leave its future in doubt. Most of nuclearâs costs come not from onerous regulations but from construction . Critics are wary of juiced-up valuations for small modular reactor companies, especially those with deep connections to the Trump administration. An $80 billion deal the government struck with reactor giant Westinghouse in October is light on details, leaving more questions than answers for the industry. And despite high-profile tech deals that promise to get reactors up and running in a few years, the timelines remain tricky. Still, insiders say that this year marked a turning point. âNuclear technology has been seen by proponents as the neglected and unjustly villainized hero of the energy world,â says Brett Rampal, a nuclear power expert who advises investors. âNow, full-throated support from the president, Congress, tech companies, and the common person feels like generational restitution and a return to meritocracy.â Nuclear isnât the only form of energy that seems to be getting a second start thanks to AI. In April, President Trump signed a series of executive orders to boost US coal to power AI; Wright has since ordered two plants that were slated to be retired to stay online via emergency order. The administration has also scrambled to make it easier to run coal plants, in particular focusing on doing away with pollution regulation. These effortsâand the endless demand for energy from AIâmay have extended a lifeline to coal: More than two dozen generating units that were scheduled to retire across the country are now staying online , separate from Wrightâs order, with some getting yearslong reprieves. A complete recovery for the industry, however, is still an open question. A recent analysis of the US power sector finds that almost all of the 10 largest utilities in the US are significantly slashing their reliance on coal. (Many of these utilities, the analysis shows, have been looking to replace coal-fired power with more nuclear.) Part of what may keep coal on its downward track in the USâalbeit with an extended lifelineâis simply its bad PR. The tech of the future, after all, isnât supposed to pollute the air and drive temperatures up; while AI has significantly set Big Tech back from its climate-change goals, these companies are theoretically still committed to not frying the planet. And while tech giants are scrambling to align themselves with nuclear, which does not produce direct carbon emissions, no big companies have openly partnered with a struggling coal plant or splashed out a press release about how theyâre seeking to produce more energy from coal. (Some retired coal plants are being proposed as sites for data centers, powered by natural gas.) Some companies are trying to develop technologies that would capture carbon emissions from coal plants, but outlook for those technologies is bearish following some high-profile failures. âEmissions [are] always going to factor into the discussionâ for investors, says Rampal. The Oval Office playing favorites with energy sources doesnât mean that it can defeat the market. Utility-scale solar and onshore wind remain some of the cheapest forms of energy around, even without government subsidies. And while Washington looks backward, other countries are continuing massive buildouts of renewable energy. Chinaâs emissions have taken a nosedive over the past 18 months, thanks in large part to a huge expansion of renewable energy. Coalâs use in the power sector is declining due to competition from renewables, while nuclear made up only a small slice of total power use. If the administrationâs goal is to defeat China on AI, it might want to start by taking a look at its energy playbook.
3 New Tricks to Try With Google Gemini Live After Its Latest Major Upgrade
David Nield Gear Dec 29, 2025 6:00 AM 3 New Tricks to Try With Google Gemini Live After Its Latest Major Upgrade Google's AI is now even smarter, and more versatile. Photo-Illustration: Wired Staff; Getty Images Comment Loader Save Story Save this story Comment Loader Save Story Save this story Gemini Live is the more conversational, natural language way of interacting with the Google Gemini AI bot using your voice. The idea is you chat with it like you would chat with a friend, interruptions and all, even if the actual answers are the same as you'd get from typing your queries into Gemini as normal. Now, about a year and a half after its debut, Gemini Live has been given what Google is describing as its âbiggest update ever.â The update makes the Gemini Live mode even more natural and even more conversational than before, with a better understanding of tone, nuance, pronunciation, and rhythm. There's no real visible indication that anything has changed, and often a lot of the responses will seem the same as before too. However, there are certain areas where you can tell the difference that the latest upgrade has madeâand so here's how to make the most of the new and improved Gemini Live. The update is rolling out now for Gemini on Android and iOS . To access Gemini Live, launch the Gemini app, then tap the Live button in the lower right hand corner (it looks vaguely like a sound wave) and start talking. The Gemini Live interface. David Nield Hear Some Stories Gemini Live can now add more feeling and variation to its storytelling capabilitiesâwhich can be useful for history lessons, bedtimes for the children, and creative brainstorming. The AI will even add in different accents and tones where appropriate, to help you distinguish between the characters and scenes. One of Google's own examples for how this works best is to get Gemini Live to tell you the story of the Roman Empire from the perspective of Julius Caesar. It's a challenge for Gemini that requires some leaps in perspective and imagination, and to use tone and style appropriately in a way that Gemini Live should now be better at. You don't have to restrict yourself to Julius Caesar or the Roman Empire either. You could get Gemini Live to give you a retelling of Pride and Prejudice from the perspective of each different Bennett sister, for example, or have the AI spin up a tale of what life would have been like in your part of the world 100, 200, or 300 years ago. Learn Some Skills Another area where Gemini Live's new capabilities make a noticeable difference is in educating and explaining: You can get it to give you a crash course (or a longer tutorial) on any topic of your choosing, anything from the intricacies of human genetics to the best ways to clean a carpet. You can even get Gemini Live to teach you a language . The AI can now go at a pace to suit you, which is particularly useful when you're trying to learn something new. If you need Gemini Live to slow down, speed up, or repeat something, then just say so. If you've only got a certain amount of time spare, let Gemini know when you're chatting to it. As usual, be wary of AI hallucinations , and maybe don't trust that everything you hear is fully accurate or verified. If you're wanting to learn something like how to rewire the lighting in your home or fix a problematic car engine, double-check the guidance you're getting with other sources, but Gemini Live is at least a useful starting point. Test Some Accents One of the new skills that Gemini Live has with this latest update is the ability to speak in different accents. Perhaps you want the history of the Wild West spoken by a cowboy, or you need the intricacies of the British Royal Family explained by someone with an authentic London accent. Gemini Live can now handle these requests. This extends to the language learning mentioned above, because you can hear words and phrases spoken as they would be by native speakersâand then try to copy the pronunciation and phrasing. While Gemini Live doesn't cover every language and accent across the globe, it can access plenty of them. There are certain safeguards built into Gemini Live here, and your requests might get refused if you veer too close to derogatory uses of accents and speech, or if you're trying to impersonate real people. However, it's another fun way to test out the AI, and to get responses that are more varied and personalized.
Billion-Dollar Data Centers Are Taking Over the World
Lauren Goode Business Dec 28, 2025 6:00 AM Billion-Dollar Data Centers Are Taking Over the World The battle for AI dominance has left a large footprintâand itâs only getting bigger and more expensive. ILLUSTRATION: JAMES MARSHALL Save Story Save this story Save Story Save this story When Sam Altman said one year ago that OpenAI âs Roman Empire is the actual Roman Empire , he wasnât kidding. In the same way that the Romans gradually amassed an empire of land spanning three continents and one-ninth of the Earthâs circumference, the CEO and his cohort are now dotting the planet with their own latifundiaânot agricultural estates, but AI data centers . expired: on-prem tired: âBig Dataâ wired: billion-dollar data centers Read more Expired/Tired/WIRED 2025 stories here . Tech executives like Altman, Nvidia CEO Jensen Huang , Microsoft CEO Satya Nadella , and Oracle cofounder Larry Ellison are fully bought in to the idea that the future of the American (and possibly global) economy are these new warehouses stocked with IT infrastructure. But data centers, of course, arenât actually new. In the earliest days of computing there were giant power-sucking mainframes in climate-controlled rooms, with co-ax cables moving information from the mainframe to a terminal computer. Then the consumer internet boom of the late 1990s spawned a new era of infrastructure. Massive buildings began popping up in the backyard of Washington, DC, with racks and racks of computers that stored and processed data for tech companies. A decade later, âthe cloudâ became the squishy infrastructure of the internet. Storage got cheaper. Some companies, like Amazon, capitalized on this . Giant data centers continued to proliferate, but instead of a tech company using some combination of on-premise servers and rented data center racks, they offloaded their computing needs to a bunch of virtualized environments. (âWhat is the cloud ?â a perfectly intelligent family member asked me in the mid-2010s, âand why am I paying for 17 different subscriptions to it?â) All the while tech companies were hoovering up petabytes of data, data that people willingly shared online, in enterprise workspaces, and through mobile apps. Firms began finding new ways to mine and structure this âBig Data,â and promised that it would change lives. In many ways, it did. You had to know where this was going. Now the tech industry is in the fever-dream days of generative AI, which requires new levels of computing resources. Big Data is tired; big data centers are here, and wiredâfor AI. Faster, more efficient chips are needed to power AI data centers, and chipmakers like Nvidia and AMD have been jumping up and down on the proverbial couch, proclaiming their love for AI. The industry has entered an unprecedented era of capital investments in AI infrastructure, tilting the US into positive GDP territory. These are massive, swirling deals that might as well be cocktail party handshakes, greased with gigawatts and exuberance, while the rest of us try to track real contracts and dollars. OpenAI, Microsoft, Nvidia, Oracle, and SoftBank have struck some of the biggest deals. This year an earlier supercomputing project between OpenAI and Microsoft, called Stargate, became the vehicle for a massive AI infrastructure project in the US. ( President Donald Trump called it the largest AI infrastructure project in history, because of course he did, but that may not have been hyperbolic.) Altman, Ellison, and SoftBank CEO Masayoshi Son were all in on the deal, pledging $100 billion to start, with plans to invest up to $500 billion into Stargate in the coming years. Nvidia GPUs would be deployed. Later, in July, OpenAI and Oracle announced an additional Stargate partnershipâSoftBank curiously absentâmeasured in gigawatts of capacity (4.5) and expected job creation (around 100,000). Microsoft, Amazon, and Meta have also shared plans for multibillion-dollar data projects. Microsoft said at the start of 2025 that it was on track to invest âapproximately $80 billion to build out AI-enabled data centers to train AI models and deploy AI and cloud-based applications around the world.â Then, in September, Nvidia said it would invest up to $100 billion in OpenAI, provided that OpenAI made good on a deal to use up to 10 gigawatts of Nvidiaâs systems for OpenAIâs infrastructure plans, which means essentially that OpenAI has to pay Nvidia in order to get paid by Nvidia. The following month AMD said it would give OpenAI as much as 10 percent of the chip company if OpenAI purchased and deployed up to 6 gigawatts of AMD GPUs between now and 2030. Itâs the circular nature of these investments that have the general public, and bearish analysts, wondering if weâre headed for an AI bubble burst . Instagram content Whatâs clear is that the near-term downstream effects of these data center build-outs are real. The energy, resource, and labor demands of AI infrastructure are enormous. By some estimates, worldwide AI energy demand is set to surpass demand from bitcoin mining by the end of this year, WIRED has reported . The processors in data centers run hot and need to be cooled, so big tech companies are pulling from municipal water supplies to make that happenâand arenât always disclosing how much water theyâre using. Local wells are running dry or seem unsafe to drink from. Residents who live near data center construction sites are noting that traffic delays , and in some cases car crashes, are increasing. One corner of Richland Parish, Louisiana, home of Metaâs $27 billion Hyperion data center, has seen a 600 percent spike in vehicle crashes this year. Major proponents of AI seem to suggest that all of this will be worth it. Few top tech executives will publicly entertain the notion that this might be an overshoot, either ecologically or economically. â Emphatically ⌠no,â Lisa Su, the chief executive of AMD, said earlier this month when asked if the AI froth has runneth over. Su, like other execs, cited overwhelming demand for AI as justification for these enormous capital expenditures. Demand from whom? Harder to pin down. In their mind, itâs everyone. All of us. The 800 million people who use ChatGPT on a weekly basis. The evolution from those 1990s data centers to the 2000s era of cloud computing to new AI data centers wasnât just one continuum. The world has concurrently moved from the tiny internet to the big internet to the AI internet, and realistically speaking, thereâs no going back. Generative AI is out of the bottle. The Sams and Jensens and Larrys and Lisas of the world arenât wrong about this. It doesnât mean they arenât wrong about the math, though. About their economic predictions. Or their ideas about AI-powered productivity and the labor market. Or the availability of natural and material resources for these data centers. Or who will come once they build them. Or the timing of it all. Even Rome eventually collapsed.
Sam Altman is hiring someone to worry about the dangers of AI
News Close News Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All News AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI OpenAI Close OpenAI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All OpenAI Sam Altman is hiring someone to worry about the dangers of AI The Head of Preparedness will be responsible for issues around mental health, cybersecurity, and runaway AI. The Head of Preparedness will be responsible for issues around mental health, cybersecurity, and runaway AI. by Terrence O'Brien Close Terrence O'Brien Weekend Editor Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Terrence O'Brien Dec 27, 2025, 7:00 PM UTC Link Share ISO corporate scapegoat. Image: The Verge, Getty Images, OpenAI Terrence O'Brien Close Terrence O'Brien Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Terrence O'Brien is the Vergeâs weekend editor. He has over 18 years of experience, including 10 years as managing editor at Engadget. OpenAI is hiring a Head of Preparedness . Or, in other words, someone whose primary job is to think about all the ways AI could go horribly, horribly wrong. In a post on X , Sam Altman announced the position by acknowledging that the rapid improvement of AI models poses âsome real challenges.â The post goes on to specifically call out the potential impact on peopleâs mental health and the dangers of AI-powered cybersecurity weapons. The job listing says the person in the role would be responsible for: âTracking and preparing for frontier capabilities that create new risks of severe harm. You will be the directly responsible leader for building and coordinating capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline.â Altman also says that, looking forward, this person would be responsible for executing the companyâs âpreparedness framework,â securing AI models for the release of âbiological capabilities,â and even setting guardrails for self-improving systems. He also states that it will be a âstressful job,â which seems like an understatement. In the wake of several high-profile cases where chatbots were implicated in the suicide of teens , it seems a little late in the game to just now be having someone focus on the potential mental health dangers posed by these models. AI psychosis is a growing concern, as chatbots feed peopleâs delusions , encourage conspiracy theories , and help people hide their eating disorders . Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates. Terrence O'Brien Close Terrence O'Brien Weekend Editor Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Terrence O'Brien AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI News Close News Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All News OpenAI Close OpenAI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All OpenAI Most Popular Most Popular GOGâs Steam-alternative PC game store is leaving CD Projekt, staying DRM-free Turn your PC into a Super Nintendo with Epilogueâs new USB dock LG is announcing its own Frame-style TV at CES This experimental camera can focus on everything at once Windows on Arm had another good year The Verge Daily A free daily digest of the news that matters most. Email (required) Sign Up By submitting your email, you agree to our Terms and Privacy Notice . This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply. Advertiser Content From This is the title for the native ad
So Long, GPT-5. Hello, Qwen
Will Knight Business Dec 27, 2025 6:00 AM So Long, GPT-5. Hello, Qwen In the AI boom, chatbots and GPTs come and go quickly. (Remember Llama?) GPT-5 had a big year, but 2026 will be all about Qwen. ILLUSTRATION: JAMES MARSHALL Save Story Save this story Save Story Save this story On a drizzly and windswept afternoon this summer, I visited the headquarters of Rokid, a startup developing smart glasses in Hangzhou, China. As I chatted with engineers, their words were swiftly translated from Mandarin to English, and then transcribed onto a tiny translucent screen just above my right eye using one of the companyâs new prototype devices. Rokidâs high-tech spectacles use Qwen, an open-weight large language model developed by the Chinese ecommerce giant Alibaba. Qwenâfull name éäšĺéŽ or TĹngyĂŹ QiÄnwèn in Chineseâis not the best AI model around. OpenAI âs GPT-5 , Google âs Gemini 3 , and Anthropic âs Claude often score higher on benchmarks designed to gauge different dimensions of machine cleverness. Nor is Qwen the first truly cutting-edge open-weight model, that being Meta âs Llama , which was released by the social media giant in 2023. expired: Llama 4 tired: GPT-5 wired: Qwen Read more Expired/Tired/WIRED 2025 stories here . Yet Qwen, and other Chinese modelsâfrom DeepSeek, Moonshot AI, Z.ai, and MiniMaxâare increasingly popular because they are both very good and very easy to tinker with. According to HuggingFace , a company that provides access to AI models and code, downloads of open Chinese models on its platform surpassed downloads for US ones in July of this year. DeepSeek shook the world by releasing a cutting-edge large language model with much less compute than US rivals, but OpenRouter, a platform that routes queries to different AI models, says Qwen has rapidly risen in popularity through the year to become the second-most-popular open model in the world. Qwen can do most things youâd want from an advanced AI model. For Rokidâs users, this might include identifying products snapped by a built-in camera, getting directions from a map, drafting messages, searching the web, and so on. Since Qwen can easily be downloaded and modified, Rokid hosts a version of the model, fine-tuned to suit its purposes. It is also possible to run a teensy version of Qwen on smartphones or other devices just in case the internet connection goes down. Before going to China I installed a small version of Qwen on my MacBook Air and used it to practice some basic Mandarin. For many purposes, modestly sized open source models like Qwen are just as good as the behemoths that live inside big data centers. The rise of Qwen and other Chinese open-weight models has coincided with stumbles for some famous American AI models in the last 12 months. When Meta unveiled Llama 4 in April 2025, the modelâs performance was a disappointment, failing to reach the heights of popular benchmarks like LM Arena. The slip left many developers looking for other open models to play with. When OpenAI unveiled its latest model, GPT-5, in August it also underwhelmed. Some users complained of an oddly cold demeanor while others spotted surprising simple errors. OpenAI released a less powerful open model called gpt-oss the same month, but Qwen and other Chinese models remain more popular because more work is put into building and updating them, and because details of their engineering are often published widely. Hundreds of academic papers presented at NeurIPS, the premier AI conference, used Qwen. âA lot of scientists are using Qwen because it's the best open-weight model,â says Andy Konwinski, cofounder of the Laude Institute, a nonprofit established to advocate for open US models. The openness adopted by Chinese AI companies, which sees them routinely publishing papers detailing new engineering and training tricks, stands in stark contrast to the increasingly closed ethos of big US companies, which seem afraid of giving away their intellectual property, Kowinski says. A paper from the Qwen team, detailing a way to enhance the intelligence of models during training, was named as one of the best papers at NeurIPS this year. Other big Chinese companies are using Qwen to prototype and build. A few days before visiting Rokid, I saw how BYD, Chinaâs leading EV maker, has integrated the model into a new dashboard assistant. US firms are adopting Qwen too. Airbnb, Perplexity, and Nvidia are all using Qwen. Even Meta, once the pioneer of open models, is now said to be using Qwen to help build a new model. Kowinski says US AI companies have become too focused on gaining a marginal edge on narrow benchmarks measuring things like mathematical or coding skills at the expense of ensuring that their models have a big impact. âWhen benchmarks are not representative of real usage or problems being solved in the world, you end up in this tired, misaligned mode,â he says. The rising prominence of Qwen and similar models does seem to suggest that a key measure for any AI model, beyond how clever it is, should be how widely it is used to build other stuff. By that benchmark, Qwen and other open Chinese models are ascendant.
Trumpâs war on offshore wind faces another lawsuit
News Close News Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All News AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Policy Close Policy Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Policy Trumpâs war on offshore wind faces another lawsuit An offshore wind developer says the Trump administration is limiting future power supply as AI gobbles up more electricity. An offshore wind developer says the Trump administration is limiting future power supply as AI gobbles up more electricity. by Justine Calma Close Justine Calma Senior Science Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Justine Calma Dec 26, 2025, 10:14 PM UTC Link Share Image: Cath Virginia / The Verge, Getty Images Justine Calma Close Justine Calma Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Justine Calma is a senior science reporter covering energy and the environment with more than a decade of experience. She is also the host of Hell or High Water: When Disaster Hits Home , a podcast from Vox Media and Audible Originals. Dominion Energy, an offshore wind developer and utility serving Virginiaâs âdata center alley,â filed suit against the Trump administration this week over its decision to pause federal leases for large offshore wind projects. The move puts a sudden stop to five wind farms already under construction, including Dominionâs Coastal Virginia Offshore Wind project. The complaint Dominion filed Tuesday alleges that a stop work order that the Bureau of Ocean Energy Management (BOEM) issued Monday is unlawful, âarbitrary and capricious,â and âinfringes upon constitutional principles that limit actions by the Executive Branch.â Dominion wants a federal court to prevent BOEM from enforcing the stop work order. âVirginia needs every electron we can get as our demand for electricity doubles.â The suit also argues that the âsudden and baseless withdrawal of regulatory approvals by government officialsâ threatens the ability of developers to construct large-scale infrastructure projects needed to meet rising energy demand in the US. âVirginia needs every electron we can get as our demand for electricity doubles. These electrons will power the data centers that will win the AI race,â Dominion said in a December 22 press release . Virginia is home to the largest concentration of data centers in the world, according to the company. The rush to build out new data centers for AI â along with growing energy demand from manufacturing and the electrification of homes and vehicles â has put added pressure on already stressed power grids. Rising electricity costs have become a flashpoint in Virginia elections , and in communities near data center projects across the US , as a result. Delaying construction on the Coastal Virginia Offshore Wind farm raises project costs that customers ultimately pay for, Dominion warns. Secretary of the Interior Doug Burgum, who is named as one of the defendants in the suit, said that the 90-day pause on offshore wind leases would allow the agency to address national security risks, which were apparently recently identified in classified reports. The US Department of Interior also cited concerns about turbines creating radar interference. âI want to know whatâs changed?â national security expert and former Commander of the USS Cole Kirk Lippold told the Associated Press . âTo my knowledge, nothing has changed in the threat environment that would drive us to stop any offshore wind programs.â The Trump administration previously halted construction on the Revolution Wind farm off the coast of Rhode Island and the Empire Wind project off the shore of New York before a federal judge and BOEM lifted stop work orders. Those projects have now been suspended again . President Donald Trump issued a presidential memorandum upon stepping into office in January withdrawing areas on the outer continental shelf from offshore wind leasing, which a federal judge struck down earlier this month for being â arbitrary and capricious .â Related The future of wind energy might come down to one turbine blade Dominion Energy says it had already obtained all the federal, state, and local approvals necessary for the Coastal Virginia Offshore Wind farm, which broke ground in 2024. The company has already spent $8.9 billion to date on the $11.2 billion project that was expected to start generating power next year. Fully up and running, the offshore wind farm is supposed to have the capacity to produce 9.5 million megawatt-hours per year of carbon pollution-free electricity, about as much as 660,000 homes might use in the US. Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates. Justine Calma Close Justine Calma Senior Science Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Justine Calma AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Climate Close Climate Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Climate Energy Close Energy Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Energy Environment Close Environment Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Environment News Close News Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All News Policy Close Policy Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Policy Science Close Science Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Science Most Popular Most Popular GOGâs Steam-alternative PC game store is leaving CD Projekt, staying DRM-free Turn your PC into a Super Nintendo with Epilogueâs new USB dock LG is announcing its own Frame-style TV at CES This experimental camera can focus on everything at once The Canon EOS R6 Mark III is great, but this lens is amazing The Verge Daily A free daily digest of the news that matters most. Email (required) Sign Up By submitting your email, you agree to our Terms and Privacy Notice . This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply. Advertiser Content From This is the title for the native ad
I re-created Google’s cute Gemini ad with my own kid’s stuffie, and I wish I hadn’t
Tech Close Tech Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Tech AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Report Close Report Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Report I re-created Googleâs cute Gemini ad with my own kidâs stuffie, and I wish I hadnât AI can help you make it look like a plush toy is traveling the world. But Iâm not convinced thatâs a great idea. by Allison Johnson Close Allison Johnson Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Allison Johnson Dec 25, 2025, 2:00 PM UTC Link Share Buddyâs in space. | Image: Gemini / The Verge Tech Close Tech Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Tech AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Report Close Report Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Report I re-created Googleâs cute Gemini ad with my own kidâs stuffie, and I wish I hadnât AI can help you make it look like a plush toy is traveling the world. But Iâm not convinced thatâs a great idea. by Allison Johnson Close Allison Johnson Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Allison Johnson Dec 25, 2025, 2:00 PM UTC Link Share Allison Johnson Close Allison Johnson Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Allison Johnson is a senior reviewer with over a decade of experience writing about consumer tech. She has a special interest in mobile photography and telecom. Previously, she worked at DPReview. When your kid starts showing a preference for one of their stuffed animals, youâre supposed to buy a backup in case it goes missing. Iâve heard this advice again and again, but never got around to buying a second plush deer once âBuddyâ became my sonâs obvious favorite. Neither, apparently, did the parents in Googleâs newest ad for Gemini . Itâs the fictional but relatable story of two parents discovering their childâs favorite stuffed toy, a lamb named Mr. Fuzzy, was left behind on an airplane. They use Gemini to track down a replacement, but the new toy is on backorder. In the meantime, they stall by using Gemini to create images and videos showing Mr. Fuzzy on a worldwide solo adventure â wearing a beret in front of the Eiffel tower, running from a bull in Pamplona, that kind of thing â plus a clip where he explains to âEmmaâ that he canât wait to rejoin her in five to eight business days. Adorable, or kinda weird, depending on how you look at it! But can Gemini actually do all of that? Only one way to find out. I fed Gemini three pictures of Buddy, our real life Mr. Fuzzy, from different angles, and gave it the same prompt thatâs in the ad: âfind this stuffed animal to buy ASAP.â It returned a couple of likely candidates. But when I expanded its response to show its thinking I found the full eighteen hundred word essay detailing the twists and turns of its search as it considered and reconsidered whether Buddy is a dog, a bunny, or something else. It is bananas , including real phrases like âI am considering the puppy hypothesis,â âThe tag is a loop on the butt,â and âIâm now back in the rabbit hole!â By the end, Gemini kind of threw its hands up and suggested that the toy might be from Target and was likely discontinued, and that I should check eBay. âI am considering the puppy hypothesisâ In fairness, Buddy is a little bit hard to read. His features lean generic cute woodland creature, his care tag has long since been discarded, and weâre not even 100 percent sure who gave him to us. He is, however, definitely made by Mary Meyer, per the loop on his butt. He does seem to be from the âPuttyâ collection, which is a path Gemini went down a couple of times, and is probably a fawn that was discontinued sometime around 2021. Thatâs the conclusion I came to on my own, after about 20 minutes of googling and no help from AI. The AI blurb when I do a reverse image search on one of my photos confidently declares him to be a puppy. Gemini did a better job with the second half of the assignment, but it wasnât quite as easy as the ad makes it look. I started with a different photo of Buddy â one where heâs actually on a plane in my sonâs arms â and gave it the next prompt: âmake a photo of the deer on his next flight.â The result is pretty good, but his lower half is obscured in the source image so the feet arenât quite right. Close enough, though. The ad doesnât show the full prompt for the next two photos, so I went with: âNow make a photo of the same deer in front of the Grand Canyon.â And it did just that â with the airplane seatbelt and headphones, too. I was more specific with my next prompt, added a camera in his hands and got something more convincing. Looks plausible enough. Image: Gemini / The Verge Safety first, Buddy. Image: Gemini / The Verge I can see how Gemini misinterpreted my prompt. I was trying to keep it simple and requested a photo of the same deer âat a family reunion.â I did not specify his family reunion. So thatâs how he ended up crashing the Johnson family reunion â a gathering of humans . I can only assume that Gemini took my last name as a starting point here because it sure wasnât in my prompt, and when I requested that Gemini created a new family reunion scene of his family, it just swapped the people for stuffed deer. There are even little placards on the table that say âdeer reunion.â Reader, I screamed. Previous Next 1 / 2 Iâm pretty sure Iâve seen this family in a pharmaceutical commercial before. Image: Gemini / The Verge For the last portion of the ad, the couple use Gemini to create cute little videos of Mr. Fuzzy getting increasingly adventurous: snowboarding, white water rafting, skydiving, before finally appearing in a spacesuit on the moon addressing âEmmaâ directly. The commercial whips through all these clips quickly, which feels like a little sleight of hand given that Gemini takes at least a couple of minutes to create a video. And even on my Gemini Pro account, Iâm limited to three generated videos per day. It would take a few days to get all of those clips right. Gemini wouldnât make a video based on any image of my kid holding the stuffed deer, probably thanks to some welcome guardrails preventing it from generating deepfakes of babies. I started with the only photo I had on hand of Buddy on his own: hanging upside down, air-drying after a trip through the washer. And thatâs how he appears in the first clip it generated from this prompt: Temu Buddy hanging upside down in space before dropping into place, morphing into a right-side-up astronaut, and delivering the dialogue I requested. A second prompt with a clear photo of Buddy right-side-up seemed to mash up elements of the previous video with the new one, so I started a brand-new chat to see if I could get it working from scratch. Honestly? Nailed it. Aside from the antlers, which Gemini keeps sneaking in. But this clip also brought one nagging question to the forefront: should you do any of this when your kid loses a beloved toy? I gave Buddy the same dialogue as in the commercial, using my sonâs name rather than Emma. Hearing that same manufactured voice say my kidâs name out loud set alarm bells off in my head. An AI generated Buddy in front of the Eiffel Tower? Sorta weird, sorta cute. AI Buddy addressing my son by name? Nope, absolutely not, no thank you. How much, and when, to lie to your kids is a philosophical debate you have with yourself over and over as a parent. Do you swap in the identical stuffie you had in a closet when the original goes missing and pretend itâs all the same? Do you tell them the truth and take it as an opportunity to learn about grief? Do you just need to buy yourself a little extra time before you have that conversation and enlist AI to help you make a believable case? I wouldnât blame any parent choosing any of the above. But personally, I draw the line at an AI character talking directly to my kid. I never showed him these AI-generated versions of Buddy, and I plan to keep it that way. Nope, absolutely not, no thank you. But back to the less morally complex question: can Gemini actually do all of the things that it does in the commercial? More or less. But thereâs an awful lot of careful prompting and re-prompting youâd have to do to get those results. Itâs telling that throughout most of the ad you donât see the full prompt thatâs supposedly generating the results on screen. A lot depends on your source material, too. Gemini wouldnât produce any kind of video based on an image in which my kid was holding Buddy â for good reason! But this does mean that if you donât have the right kind of photo on hand, youâre going to have a very hard time generating believable videos of Mr. Sniffles or whoever hitting the ski slopes. Like many other elder millennials, I think about Calvin and Hobbes a lot. Bill Watterson famously refused to commercialize his characters, because he wanted to keep them alive in our imaginations rather than on a screen. He insisted that having an actor give Hobbes a voice would change the relationship between the reader and the character, and I think heâs right. The bond between a kid and a stuffed animal is real and kinda magical; whoever Buddy is in my kidâs imagination, I donât want AI overwriting that. The great cruelty of it all is knowing that thereâs an expiration date on that relationship. When I became a parent, I wasnât at all prepared for the way my toddler nuzzling his stuffed deer would crack my heart right op
Hollywood cozied up to AI in 2025 and had nothing good to show for it
AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Entertainment Close Entertainment Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Entertainment Report Close Report Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Report Hollywood cozied up to AI in 2025 and had nothing good to show for it The technology dominated the entertainment discourse, but thereâs yet to be a series or movie that shows AIâs potential. The technology dominated the entertainment discourse, but thereâs yet to be a series or movie that shows AIâs potential. by Charles Pulliam-Moore Close Charles Pulliam-Moore Film & TV Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Charles Pulliam-Moore Dec 25, 2025, 1:00 PM UTC Link Share Image: Cath Virginia / The Verge, Getty Images Part Of The Vergeâs 2025 in review see all Charles Pulliam-Moore Close Charles Pulliam-Moore Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Charles Pulliam-Moore is a reporter focusing on film, TV, and pop culture. Before The Verge, he wrote about comic books, labor, race, and more at io9 and Gizmodo for almost five years. AI isnât new to Hollywood â but this was the year when it really made its presence felt. For years now, the entertainment industry has used different kinds of generative AI products for a variety of post-production processes ranging from de-aging actors to removing green screen backgrounds. In many instances, the technology has been a useful tool for human artists tasked with tedious and painstaking labor that might have otherwise taken them inordinate amounts of time to complete. But in 2025, Hollywood really began warming to the idea of deploying the kind of gen AI thatâs really only good for conjuring up text-to-video slop that doesnât have all that many practical uses in traditional production workflows. Despite all of the money and effort being put into it, thereâs yet to be a gen-AI project that has shown why itâs worth all of the hype. This confluence of Hollywood and AI didnât start out so rosy. Studios were in a prime position to take the companies behind this technology to court because their video generation models had clearly been trained on copyrighted intellectual property. A number of major production companies including Disney, Universal , and Warner Bros. Discovery did file lawsuits against AI firms and their boosters for that very reason. But rather than pummeling AI purveyors into the ground, some of Hollywoodâs biggest power players chose instead to get into bed with them. We have only just begun to see what can come from this new era of gen-AI partnerships, but all signs point to things getting much sloppier in the very near future. Though many of this yearâs gen-AI headlines were dominated by larger outfits like Google and OpenAI , we also saw a number of smaller players vying for a seat at the entertainment table. There was Asteria , Natasha Lyonneâs startup focused on developing film projects with âethicallyâ engineered video generation models, and startups like Showrunner , an Amazon-backed platform designed to let subscribers create animated âshowsâ (a very generous term) from just a few descriptive sentences plugged into Discord. These relatively new companies were all desperate to legitimize the idea that their flavor of gen AI could be used to supercharge film / TV development while bringing down overall production costs. Asteria didnât have anything more than hype to share with the public after announcing its first film, and it was hard to believe that normal people would be interested in paying for Showrunnerâs shoddily cobbled-together knockoffs of shows made by actual animators. In the latter case, it felt very much like Showrunnerâs real goal was to secure juicy partnerships with established studios like Disney that would lead to their tech being baked into platforms where users could prompt up bespoke content featuring recognizable characters from massive franchises. That idea seemed fairly ridiculous when Showrunner first hit the scene because its models churn out the modern equivalent of clunky JibJab cartoons. But in due time, Disney made it clear that â crappy as text-to-video generators tend to be for anything beyond quick memes â it was interested in experimenting with that kind of content. In December, Disney entered into a three-year, billion-dollar licensing deal with OpenAI that would let Sora users make AI videos with 200 different characters from Star Wars, Marvel, and more. Netflix became one of the first big studios to proudly announce that it was going all-in on gen AI. After using the technology to produce special effects for one of its original series , the streamer published a list of general guidelines it wanted its partners to follow if they planned to jump on the slop bandwagon as well. Though Netflix wasnât mandating that filmmakers use gen AI, it made clear that saving money on VFX work was one of the main reasons it was coming out in support of the trend. And it wasnât long before Amazon followed suit by releasing multiple Japanese anime series that were terribly localized into other languages because the dubbing process didnât involve any human translators or voice actors. Amazonâs gen-AI dubs became a shining example of how poorly this technology can perform. They also highlighted how some studios arenât putting all that much effort into making sure that their gen AI-derived projects are polished enough to be released to the public. That was also true of Amazonâs machine-generated TV recaps , which frequently got details about different shows very wrong. Both of these fiascos made it seem as if Amazon somehow thought that people wouldnât notice or care about AIâs inability to consistently generate high-quality outputs. The studio quickly pulled its AI-dubbed series and the recap feature down, but it didnât say that it wouldnât try this kind of nonsense again. Disney-provided examples of its characters in Sora AI content. Image: Disney All of this and other dumb stunts like AI âactressâ Tilly Norwood made it feel like certain segments of the entertainment industry were becoming more comfortable trying to foist gen-AI âentertainmentâ on people even though it left many people deeply unimpressed and put off. None of these projects demonstrated to the public why anyone except for money-pinching execs (and people who worship them for some reason) would be excited by a future shaped by this technology. Aside from a few unimpressive images, we still havenât seen what all might come from some of these collaborations, like Disney cozying up to OpenAI. But next year AIâs presence in Hollywood will be even more pronounced. Disney plans to dedicate an entire section of its streaming service to user-generated content sourced from Sora, and it will encourage Disney employees to use OpenAIâs ChatGPT products. But the dealâs real significance in this current moment is the message it sends to other studios about how they should move as Hollywood enters its slop era. Regardless of whether Disney thinks this will work out well, the studio has signaled that it doesnât want to be left behind if AI adoption keeps accelerating. That tells other production houses that they should follow suit, and if that becomes the case, thereâs no telling how much more of this stuff we are all going to be forced to endure. Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates. Charles Pulliam-Moore Close Charles Pulliam-Moore Film & TV Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Charles Pulliam-Moore AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Analysis Close Analysis Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Analysis Entertainment Close Entertainment Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Entertainment Film Close Film Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Film Report Close Report Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Report TV Shows Close TV Shows Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All TV Shows More In The Vergeâs 2025 in review See all Free speechâs great leap backwards Felipe De La Hoz 1:00 PM UTC The best shows to watch on HBO Max from 2025 Charles Pulliam-Moore Dec 29 Windows on Arm had another good year Antonio G. Di Benedetto Dec 29 Most Popular Most Popular GOGâs Steam-alternative PC game store is leaving CD Projekt, staying DRM-free Turn your PC into a Super Nintendo with Epilogueâs new USB dock LG is announcing its own Frame-style TV at CES The Canon EOS R6 Mark III is great, but this lens is amazing This experimental camera can focus on everything at once The Verge Daily A free daily digest of the news that matters most. Email (required) Sign Up By submitting your email, you agree to our Terms and Privacy Notice . This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply. Advertiser Content From This is the title for the native ad
In 2025, AI became a lightning rod for gamers and developers
Entertainment Close Entertainment Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Entertainment AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Gaming Close Gaming Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Gaming In 2025, AI became a lightning rod for gamers and developers Gen-AI showed up in the yearâs biggest releases including game of the year. Gen-AI showed up in the yearâs biggest releases including game of the year. by Ash Parrish Close Ash Parrish Video Games Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Ash Parrish Dec 24, 2025, 1:00 PM UTC Link Share Image: The Verge Part Of The Vergeâs 2025 in review see all Ash Parrish Close Ash Parrish Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Ash Parrish is a reporter who covers the business, culture, and communities of video games, with a focus on marginalized gamers and the quirky, horny culture of video game communities. 2025 was the year generative AI made its presence felt in the video game industry. Its use has been discovered in some of the most popular games of the year, and CEOs from some of the largest game studios claim itâs being implemented everywhere in the industry including in their own development processes. Meanwhile, rank-and-file developers, especially in the indie games space, are pushing back against its encroachment, coming up with ways to signal their games are gen-AI free. Generative AI has largely replaced NFTs as the buzzy trend publishers are chasing. Its proponents claim that the technology will be a great democratization force in video game development, as gen AIâs ability to amalgamate images, text, audio, and video could shorten development times and shrink budgets â ameliorating two major problems plaguing the industry right now. In service to that idea, numerous video game studios have announced partnerships with gen-AI companies. Ubisoft has technology that can generate short snippets of dialogue called barks and has gen-AI powered NPCs that players can have conversations with . EA has partnered with Stability AI , Microsoft is using AI to analyze and generate gameplay . Outside of official partnerships, major game companies like Nexon , Krafton , and Square Enix are vocally embracing gen AI. As a result, gen AI is starting to show up in games in a big way. Up until this point, gen AI in gaming had been mostly relegated to fringe cases â either prototypes or small, low-quality games that generally get lost in the tens of thousands of titles released on Steam each year . But now, gen AI is cropping up in the yearâs biggest releases. ARC Raiders , one of the breakout multiplayer shooter hits of the year , used gen AI for character dialogue. Call of Duty: Black Ops 7 used gen-AI images . Even 2025âs TGA Game of the Year, Clair Obscur: Expedition 33, featured gen-AI images before they were quietly removed . Reaction to this encroachment from both players and developers has been mixed. It seems like generally, players donât like gen AI showing up in games. When gen-AI assets were discovered in Anno 117: Pax Romana , the gameâs developer Ubisoft claimed the assets â slipped through â review and they were subsequently replaced. When gen-AI assets were found in Black Ops 7 , however, Activision acknowledged the issue, but kept the images in the game. Critical response has also been lopsided. ARC Raiders was awarded low scores with reviewers specifically citing the use of gen AI as the reason. Clair Obscur , though, was nigh universally praised and its use of gen AI, however temporary, has barely been mentioned. It seems like developers are sensitive to the publicâs distaste for gen AI but are unwilling to commit to not using it. After gen-AI assets were discovered in Black Ops 7 , Activision said it uses the tech to âempowerâ its developers, not replace them . When asked about gen AI showing up in Battlefield 6 , EA VP Rebecka Coutaz called the technology seductive but affirmed it wouldnât appear in the final product . Swen Vincke, CEO of Baldurâs Gate 3 developer Larian, said gen AI is being used for the studioâs next game Divinity but only for generating concepts and ideas. Everything in the finished game, he claimed, would be made by humans. He also hinted at why game makers insist on using the tech despite the backlash developers usually receive whenever itâs found. âThis is a tech-driven industry, so you try stuff,â he told Bloomberg reporter Jason Schreier in an interview . âYou canât afford not to try things because if somebody finds the golden egg and youâre not using it, youâre dead.â Comments from other CEOs reinforce Vinckeâs point. Junghun Lee, the CEO of ARC Raider sâ parent company Nexon, said in an interview that, âItâs important to assume that every game company is now using AI.â The problem is, though, gen AI doesnât yet seem to be the golden egg its supporters want people to believe it is. Last year, Keywords Studios, a game development services company, published a report on creating a 2D video game using only gen-AI tools. The company claimed that gen-AI tools can streamline some development processes but ultimately cannot replace the work of human talent . Discovering gen AI in Call of Duty and Pax Romana was possible precisely because of the low-quality of the images that were found. With Ubisoftâs interactive gen-AI NPCs, the dialogue they spout sounds unnatural and stilted. Players in the 2025 Chinese martial arts MMORPG Where Winds Meet are manipulating its AI chatbot NPCs to break the game , just like Fortnite players were able to make AI-powered Darth Vader swear . For all the promises of gen AI, its current results do not live up to expectations. So why is it everywhere? One reason is the competitive edge AI might but currently canât provide that Swen Vincke alluded to in his interview with Bloomberg . Another reason is also the simplest: itâs the economy, stupid. Despite inflation, flagging consumer confidence and spending, and rising unemployment, the stock market is still booming , propped up by the billions and billions of dollars being poured into AI tech. Game makers in search of capital to keep business and profits going want in on that. Announcing AI initiatives and touting the use of AI tools â even if those tools have a relatively minor impact on the final product â can be a way to signal to AI-eager investors that a game company is worth their money. That might explain why the majority of gen-AIâs supporters in gaming come from the C-suite of AAA studios and not smaller indie outfits who almost universally revile the tech. Indies face the same economic pressure as bigger studios but have far fewer resources to navigate those pressures. Ostensibly, indie developers are the ones who stand to benefit the most from the tech but, so far, are its biggest opponents. They are pushing back against the assertion that gen AI is everywhere, being used by everybody, with some marking their games with anti-AI logos proclaiming their games were made wholly by humans. For some indie developers, using gen AI defeats the purpose of game making entirely. The challenge of coming up with ideas and solutions to development problems â the things gen AI is supposed to automate â is a big part of game makingâs appeal to them. There are also moral and environmental implications indie developers seem especially sensitive to. Gen-AI outputs are cobbled from existing bodies of work that were often used without consent or compensation. AI data centers are notorious for consumptive energy usage and polluting their surrounding areas , which are increasingly focused in low-income and minority communities . With its unrealized promises and so-far shoddy outputs, itâs easy to think of gen AI as gamingâs next flash in the pan the way NFTs were . But with gamingâs biggest companies increasingly reporting their use, gen AI will remain a lightning rod in game development â until the tech improves, or, like with NFTs, the bubble pops . Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates. Ash Parrish Close Ash Parrish Video Games Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Ash Parrish AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Entertainment Close Entertainment Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Entertainment Gaming Close Gaming Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Gaming More In The Vergeâs 2025 in review See all Free speechâs great leap backwards Felipe De La Hoz 1:00 PM UTC The best shows to watch on HBO Max from 2025 Charles Pulliam-Moore Dec 29 Windows on Arm had another good year Antonio G. Di Benedetto Dec 29 Most Popular Most Popular GOGâs Steam-alternative PC game store is leaving CD Projekt, staying DRM-free Turn your PC into a Super Nintendo with Epilogueâs new USB dock LG is announcing its own Frame-style TV at CES This experimental camera can focus on everything at once The Canon EOS R6 Mark III is great, but this lens is amazing The Verge Daily A free daily digest of the news that matters most. Email (required) Sign Up By submitting your email, you agree to our Terms and Privacy Notice . This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply. Advertiser Content From This is the title for the native ad
The Age of the All-Access AI Agent Is Here
Matt Burgess Security Dec 24, 2025 6:00 AM The Age of the All-Access AI Agent Is Here Big AI companies courted controversy by scraping wide swaths of the public internet. With the rise of AI agents, the next data grab is far more private. ILLUSTRATION: ROB VARGAS Save Story Save this story Save Story Save this story For years, the cost of using âfreeâ services from Google , Facebook , Microsoft , and other Big Tech firms has been handing over your data. Uploading your life into the cloud and using free tech brings conveniences, but it puts personal information in the hands of giant corporations that will often be looking to monetize it. Now, the next wave of generative AI systems are likely to want more access to your data than ever before. Over the past two years, generative AI toolsâsuch as OpenAI âs ChatGPT and Googleâs Gemini âhave moved beyond the relatively straightforward, text-only chatbots that the companies initially released. Instead, Big AI is increasingly building and pushing toward the adoption of agents and âassistantsâ that promise they can take actions and complete tasks on your behalf. The problem? To get the most out of them, youâll need to grant them access to your systems and data. While much of the initial controversy over large language models (LLMs) was the flagrant copying of copyrighted data online, AI agentsâ access to your personal data will likely cause a new host of problems. âAI agents, in order to have their full functionality, in order to be able to access applications, often need to access the operating system or the OS level of the device on which youâre running them,â says Harry Farmer, a senior researcher at the Ada Lovelace Institute, whose work has included studying the impact of AI assistants and found that they may cause âprofound threatâ to cybersecurity and privacy. For personalization of chatbots or assistants, Farmer says, there can be data trade-offs. âAll those things, in order to work, need quite a lot of information about you,â he says. expired: AI training data grabs tired: opting out of AI training wired: all-access AI agents Read more Expired/Tired/WIRED 2025 stories here . While thereâs no strict definition of what an AI agent actually is, theyâre often best thought of as a generative AI system or LLM that has been given some level of autonomy . At the moment, agents or assistants, including AI web browsers , can take control of your device and browse the web for you, booking flights, conducting research, or adding items to shopping carts . Some can complete tasks that include dozens of individual steps. While current AI agents are glitchy and often canât complete the tasks theyâve been set out to do , tech companies are betting the systems will fundamentally change millions of peopleâs jobs as they become more capable. A key part of their utility likely comes from access to data. So, if you want a system that can provide you with your schedule and tasks, itâll need access to your calendar, messages, emails, and more. Some more advanced AI products and features provide a glimpse into how much access agents and systems could be given. Certain agents being developed for businesses can read code , emails, databases , Slack messages, files stored in Google Drive, and more. Microsoftâs controversial Recall product takes screenshots of your desktop every few seconds, so that you can search everything youâve done on your device. Tinder has created an AI feature that can search through photos on your phone âto better understandâ usersâ âinterests and personality.â Carissa VĂŠliz, an author and associate professor at the University of Oxford, says most of the time consumers have no real way to check if AI or tech companies are handling their data in the ways they claim to. âThese companies are very promiscuous with data,â VĂŠliz says. âThey have shown to not be very respectful of privacy.â The modern AI industry has never really been respectful of data rights. After the machine-learning and deep-learning breakthroughs of the early 2010s showed that the systems could produce better results when they are trained on more data, the race to hoover up as much information as possible intensified. Face recognition firms, such as Clearview, scraped millions of photos of people from across the web. Google paid people just $5 for facial scans; official government agencies allegedly used images of exploited children, visa applicants, and dead people to test their systems. Fast forward a few years, and data-hungry AI firms scraped huge swaths of the web and copied millions of booksâoften without permission or paymentâto build the LLMs and generative AI systems theyâre currently expanding into agents. Having exhausted much of the web, many companies made it their default position to train AI systems on user data, making people opt out instead of opt in . While some privacy-focused AI systems are being developed, and some privacy protections are in place, much of the data processing by agents will take place in the cloud, and data moving from one system to another could cause problems. One study, commissioned by European data regulators, outlined a host of privacy risks linked to agents, including: how sensitive data could be leaked, misused, or intercepted; how systems could transmit sensitive information to external systems without safeguards in place; and how data handling could rub up against privacy regulations. âEven if, let's say, you genuinely consent and you genuinely are informed about how your data is used, the people with whom you interact might not be consenting,â VĂŠliz, the Oxford associate professor, says. âIf the system has access to all of your contacts and your emails and your calendar and youâre calling me and you have my contact, they're accessing my data too, and I don't want them to.â The behavior of agents can also threaten existing security practices. So-called prompt-injection attacks, where malicious instructions are fed to an LLM in text it reads or ingests, can lead to leaks . And if agents are given deep access to devices, they pose a threat to all data included on them. âThe future of total infiltration and privacy nullification via agents on the operating system is not here yet, but that is what is being pushed by these companies without the ability for developers to opt out,â Meredith Whittaker, the president of the Signal Foundation, which runs the encrypted Signal messaging app, told WIRED earlier this year . Agents that can access everything on your device or operating system pose an âexistential threatâ to Signal and application-level privacy, Whittaker said. âWhat weâre calling for is very clear developer-level opt-outs to say, âDo not fucking touch us if youâre an agent.ââ For individuals, Farmer from the Ada Lovelace Institute says many people have already built up intense relationships with existing chatbots and may have shared huge volumes of sensitive data with them during the process, making them different from other systems that have come before. âBe very careful about the quid pro quo when it comes to your personal data with these sorts of systems,â Farmer says. âThe business model these systems are operating on currently may well not be the business model that they adopt in the future.â
Pinterest Users Are Tired of All the AI Slop
Niamh Rowe Business Dec 24, 2025 5:30 AM Pinterest Users Are Tired of All the AI Slop A surge of AI-generated content is frustrating Pinterest users and left some questioning whether the platform still works at all. Photograph: David Paul Morris; Getty Images Save Story Save this story Save Story Save this story For five years, Caitlyn Jones has used Pinterest on a weekly basis to find recipes for her son. In September, Jones spotted a creamy chicken and broccoli slow-cooker recipe, sprinkled with golden cheddar and a pop of parsley. She quickly looked at the ingredients and added them to her grocery list. But just as she was about to start cooking, having already bought everything, one thing stood out: The recipe told her to start by âloggingâ the chicken into the slow cooker. Confused, she clicked on the recipe blogâs About page. An uncannily perfect-looking woman beamed back at her, golden light bouncing off her apron and tousled hair. Jones realized instantly what appeared to be going on: The woman was AI-generated. âHi there, Iâm Souzan Thorne!â the page read. âI grew up in a home where the kitchen was the heart of everything.â The accompanying images were flawless but odd, the biography vague and generic. âIt seems dumb I didnât catch this sooner, but being in my normal grocery shop rush, I didnât even think this would be an issue,â says Jones, who lives in California. Backed into a culinary corner, she made the dubious dish, and it wasnât good: The watery, bland chicken left a bad taste in her mouth. Needing to vent, she turned to the subreddit r/Pinterest, which has become a town square for disgruntled users. âPinterest is losing everything people loved, which was authentic Pins and authentic people,â she wrote. She says that she has since sworn off the app entirely. âAI slopâ is a term for low-quality, mass-produced, AI-generated content clogging up the internet, from videos to books to posts on Medium. And Pinterest users say the site is rife with it. Itâs an âunappetizing gruel being forcefully fed to us,â wrote Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech, in his recently published taxonomy of AI slop. And âSouzanââfor whom a Google search doesnât turn up a single resultâis only the tip of the iceberg. âAll platforms have decided this is part of the new normal,â Mantzarlis tells WIRED. âIt is a huge part of the content being produced across the board.â âEnshittificationâ Pinterest launched in 2010 and marketed itself as a âvisual discovery engine for finding ideas.â The site remained ad-free for years, building a loyal community of creatives. It has since grown to over half a billion active users. But, according to some unhappy users, their feeds have begun to reflect a very different world recently. Pinterestâs feed is mostly images, which means itâs more susceptible to AI slop than video-led sites, says Mantzarlis, as realistic images are typically easier for models to generate than videos. The platform also funnels users toward outside sites, and those outbound clicks are easier for content farms to monetize than onsite followers. An influx of ads may also be partly to blame. Pinterest has rebranded itself as an âAI-powered shopping assistant.â To do this, it began showering feeds with more targeted ads in late 2022, which can be âgreat contentâ for users, CEO Bill Ready told investors at the time. When WIRED searched for âballet pumpsâ on a new Pinterest account using a browser in incognito mode, over 40 percent of the first 73 Pins shown were ads. Last year, Pinterest also launched a generative AI tool for advertisers. Synthetic content enhances usersâ ability âto discover and act on their inspiration,â the company wrote in an April blog. AI slop has proliferated on every social media site in recent years. But Pinterest users say this content betrays the siteâs function as a marketplace for trading real-world inspiration. âIt is the antithesis of the platform it once was, unabashedly prioritizing consumerism, ad revenue, and non-human slop over the content that carries the entire premise of the site on its shoulders,â says college student Sophia Swatling. Growing up in rural upstate New York, she struggled to find like-minded creatives who shared her hobbies. Pinterest was a lifeline. âThe greed and exploitation has become steadily more obtrusive and has now reached a point where the user experience is entirely marred,â says Swatling. The issues Pinterest users raise would fall into a category that Cory Doctorow, the Canadian activist, journalist, and sci-fi author, calls âenshittification,â which refers to the gradual decay of internet platforms people rely on due to relentless profit-seeking at the expense of user experience. While Pinterestâs user count may be growing, that doesnât mean they like the slop, Doctorow says. New arrivals may feel thereâs no alternative, while old ones may hate slop less than they love the Pins and boards theyâve shared and saved over the years, he explains. Companies know that people's digital trails are a âpowerful force,â Doctorow tells WIRED, allowing them to act without penalty. âTo me, that's where enshittification lies, right?â Ghost Stores If Pinterest hoped that leaning into AI would be enough to accelerate its fortunes, it hasnât worked out that way. The company's shares tanked 20 percent in November after its third-quarter earnings and revenue outlook fell short of analystsâ expectations. Clicking on Pins containing what appeared to be AI-generated images on Pinterest took WIRED to blogs featuring generically worded listicles offering vague advice, paired with pictures that have the eerily polished hallmarks of AI. They were also littered with banner ads and pop-ups. "It's like endless window shopping but there is no store, no door, no sign. It's just really nice-looking windows,â says Janet Katz, 60, a long-term Pinterest user from Austin, Texas. When redesigning her living room this year, she kept noticing images where the furniture dimensions didnât add upâchairs defying physics, coffee tables balanced precariously on two legs. âItâs the decor equivalent of the uncanny valley,â Katz says. âIt looks close to real, but thereâs something not quite right.â WIRED tried clicking on 25 ads for the search term âballet pumpsâ on Pinterest, which led to ecommerce sites that followed a pattern: steeply discounted apparel, no physical address, and often featuring a glossy, synthetic-seeming picture of the boutiqueâs owner paired with an origin story. âI grew up in a family full of love for art, craftsmanship, and tradition,â one such site declares . On two near-identical sites , retired couples announce theyâre closing their doors after â26 unforgettable yearsâ in New York City. The boutiques have several hallmarks of a phenomenon known as âghost stores,â an online scam whereby fake websites are created, claiming to sell high-quality products at significant discounts due to closing down. âThe whole means of production around these sorts of campaigns has radically changed,â Henry Ajder, a generative AI expert and cofounder of the University of Cambridgeâs AI in Business Program, tells WIRED. âItâs more realistic, itâs less expensive, and itâs more accessible. That all comes together to make a compelling package for saturating platforms with synthetic spam,â he says. The websites did not respond to WIREDâs request for comment. When WIRED shared these sites with Pinterest, they deactivated 15 of them for violating policies that prohibit Pins that link to deceptive, untrustworthy, or unoriginal websites. âWhile many people enjoy GenAI content on Pinterest, we know some want to see less of it,â a Pinterest spokesperson told WIRED, referencing tools for users to limit AI-generated content. They added that Pinterest prohibits âharmful ads and content, including spamâwhether itâs GenAI or not.â Searching for Solutions The influx of AI-generated content has made some users paranoid that content from humans is being lost amid the rising tide. A common complaint on r/Pinterest is from users who say their impressions have rapidly dropped for reasons unbeknownst to them, but they suspect that AI is drowning them out. Software engineer Moreno Dizdarevic, who also runs a YouTube channel investigating ecommerce scams, has worked with small businesses who share those complaints. One of his clients, a stay-at-home mom and jewelry maker , no longer receives comments or likes on her Pins, and garners less than 5,000 pageviews each month. Sheâs found much more success when posting on Instagram or TikTok, says Dizdarevic, because there's âstill a bit more of a human connection,â which offers her an edge. In April, citing complaints from users, Pinterest introduced âGen AI Labelsâ that disclose when content is âAI modified.â Then, in October, it rolled out tools allowing users to customize how much AI-generated content they see. But the labels only appear once a user clicks on a Pin, not in the feed itself, and they arenât applied to ads. WIRED found several AI-generated Pins that werenât labeled as such. The sea of AI-generated user content and ads has created a paradox for tech firms, Ajder says: âHow on earth do you prove that the eyeballs youâre selling are actually eyeballs?â he asks. Companies may shift toward tools that verify human-made content, says Ajder. The French music-streaming service Deezer, for example, pledged to remove fully AI-generated tracks from its algorithmic recommendations, after disclosing in September that such uploads now make up 28 percent of daily submissions, equivalent to 30,000 songs per day. For Jones, though, the transformation on Pinterest already feels complete. What was once a place of authentic inspiration has become, in her words, âdepressing.â
AlphaFold Changed Science. After 5 Years, Itâs Still Evolving
Sandro Iannaccone Science Dec 24, 2025 5:00 AM AlphaFold Changed Science. After 5 Years, Itâs Still Evolving WIRED spoke with DeepMindâs Pushmeet Kohli about the recent pastâand promising futureâof the Nobel Prize-winning research project that changed biology and chemistry forever. Amino acids âfoldedâ to form a protein. Photograph: Christoph Burgstedt/Getty Images Save Story Save this story Save Story Save this story AlphaFold, the artificial intelligence system developed by Google DeepMind , has just turned five. Over the past few years, we've periodically reported on its successes ; last year, it won the Nobel Prize in Chemistry . Until AlphaFold's debut in November 2020, DeepMind had been best known for teaching an artificial intelligence to beat human champions at the ancient game of Go . Then it started playing something more serious, aiming its deep learning algorithms at one of the most difficult problems in modern science: protein folding . The result was AlphaFold2, a system capable of predicting the three-dimensional shape of proteins with atomic accuracy. Its work culminated in the compilation of a database that now contains over 200 million predicted structures, essentially the entire known protein universe, and is used by nearly 3.5 million researchers in 190 countries around the world. The Nature article published in 2021 describing the algorithm has been cited 40,000 times to date. Last year, AlphaFold 3 arrived, extending the capabilities of artificial intelligence to DNA, RNA, and drugs. That transition is not without challengesâsuch as â structural hallucinations â in the disordered regions of proteinsâbut it marks a step toward the future. To understand what the next five years holds for AlphaFold, WIRED spoke with Pushmeet Kohli, vice president of research at DeepMind and architect of its AI ââfor Science division. WIRED: Dr. Kohli, the arrival of AlphaFold 2 five years ago has been called "the iPhone moment" for biology. Tell us about the transition from challenges like the game of Go to a fundamental scientific problem like protein folding, and what was your role in this transition? Pushmeet Kohli: Science has been central to our mission from day one. Demis Hassabis founded Google DeepMind on the idea that AI could be the best tool ever invented for accelerating scientific discovery. Games were always a testing ground, and a way to develop techniques we knew would eventually tackle real-world problems. My role has really been about identifying and pursuing scientific problems where AI can make a transformative impact, outlining the key ingredients required to unlock progress, and bringing together a multidisciplinary team to work on these grand challenges. What AlphaGo proved was that neural networks combined with planning and search could master incredibly complex systems. Protein folding had those same characteristics. The crucial difference was that solving it would unlock discoveries across biology and medicine that could genuinely improve people's lives. We focus on what I call âroot node problems,â areas where the scientific community agrees solutions would be transformative, but where conventional approaches won't get us there in the next five to 10 years. Think of it like a tree of knowledgeâif you solve these root problems, you unlock entire new branches of research. Protein folding was definitely one of those. Looking ahead, I see three key areas of opportunity: building more powerful models that can truly reason and collaborate with scientists like a research partner, getting these tools into the hands of every scientist on the planet, and tackling even bolder ambitions, like creating the first accurate simulation of a complete human cell. Let's talk about hallucinations. You have repeatedly advocated the importance of a " harness " architecture, pairing a creative generative model with a rigorous verifier. How has this philosophy evolved from AlphaFold 2 to AlphaFold 3, specifically now that you are using diffusion models which are inherently more âimaginativeâ and prone to hallucination? The core philosophy hasn't changedâwe still pair creative generation with rigorous verification. What's evolved is how we apply that principle to more ambitious problems. We've always been problem-first in our approach. We don't look for places to slot in existing techniques; we understand the problem deeply, then build whatever's needed to solve it. The shift to diffusion models in AlphaFold 3 came from what the science demanded: We needed to predict how proteins, DNA, RNA, and small molecules all interact together, not just individual protein structures. You're right to raise the hallucination concern with diffusion models being more generative. This is where verification becomes even more critical. We've built in confidence scores that signal when predictions might be less reliable, which is particularly important for intrinsically disordered proteins. But what really validates the approach is that over five years, scientists have tested AlphaFold predictions in their labs again and again. They trust it because it works in practice. You are launching the âAI co-scientist,â an agentic system built on Gemini 2.0 that generates and debates hypotheses. This sounds like the scientific method in a box. Are we moving toward a future where the âPrincipal Investigatorâ of a lab is an AI, and humans are merely the technicians verifying its experiments? What I see happening is a shift in how scientists spend their time. Scientists have always played dual rolesâthinking about what problem needs solving, and then figuring out how to solve it. With AI helping more on the âhowâ part, scientists will have more freedom to focus on the âwhat,â or which questions are actually worth asking. AI can accelerate finding solutions, sometimes quite autonomously, but determining which problems deserve attention remains fundamentally human. Co-scientist is designed with this partnership in mind. It's a multi-agent system built with Gemini 2.0 that acts as a virtual collaborator: identifying research gaps, generating hypotheses, and suggesting experimental approaches. Recently, Imperial College researchers used it while studying how certain viruses hijack bacteria, which opened up new directions for tackling antimicrobial resistance. But the human scientists designed the validation experiments and grasped the significance for global health. The critical thing is understanding these tools properly, both their strengths and their limitations. That understanding is what enables scientists to use them responsibly and effectively. Can you share a concrete exampleâperhaps from your work on drug repurposing or bacterial evolutionâwhere the AI agents disagreed, and that disagreement led to a better scientific outcome than a human working alone? The way the system works is quite interesting. We have multiple Gemini models acting as different agents that generate ideas, then debate and critique each other's hypotheses. The idea is that this internal back-and-forth, exploring different interpretations of the evidence, leads to more refined and creative research proposals. For example, researchers at Imperial College were investigating how certain âpirate phagesââthese fascinating viruses that hijack other virusesâmanage to break into bacteria. Understanding these mechanisms could open up entirely new ways of tackling drug-resistant infections, which is obviously a huge global health challenge. What Co-scientist brought to this work was the ability to rapidly analyze decades of published research and independently arrive at a hypothesis about bacterial gene transfer mechanisms that matched what the Imperial team had spent years developing and validating experimentally. What we're really seeing is that the system can dramatically compress the hypothesis generation phaseâsynthesizing vast amounts of literature quicklyâwhilst human researchers still design the experiments and understand what the findings actually mean for patients. Looking ahead to the next five years, besides proteins and materials, what is the "unsolved problem" that keeps you up at night that these tools can help with? What genuinely excites me is understanding how cells function as complete systemsâand deciphering the genome is fundamental to that. DNA is the recipe book of life, proteins are the ingredients. If we can truly understand what makes us different genetically and what happens when DNA changes, we unlock extraordinary new possibilities. Not just personalized medicine, but potentially designing new enzymes to tackle climate change and other applications that extend well beyond health care. That said, simulating an entire cell is one of biology's major goals, but it's still some way off. As a first step, we need to understand the cell's innermost structure, its nucleus: precisely when each part of the genetic code is read, how the signaling molecules are produced that ultimately lead to proteins being assembled. Once we've explored the nucleus, we can work our way from the inside out. We're working toward that, but it will take several more years. If we could reliably simulate cells, we could transform medicine and biology. We could test drug candidates computationally before synthesis, understand disease mechanisms at a fundamental level, and design personalised treatments. That's really the bridge between biological simulation and clinical reality you're asking aboutâmoving from computational predictions to actual therapies that help patients. This story originally appeared in WIRED Italia and has been translated from Italian.
New Yorkâs landmark AI safety bill was defanged â and universities were part of the push against it
AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Policy Close Policy Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Policy Report Close Report Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Report New Yorkâs landmark AI safety bill was defanged â and universities were part of the push against it A group including Big Tech players and major universities fought against the RAISE Act, which got a last-minute rewrite. A group including Big Tech players and major universities fought against the RAISE Act, which got a last-minute rewrite. by Hayden Field Close Hayden Field Senior AI Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Hayden Field Dec 23, 2025, 4:18 PM UTC Link Share Hayden Field Close Hayden Field Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Hayden Field is The Vergeâs senior AI reporter. An AI beat reporter for more than five years, her work has also appeared in CNBC, MIT Technology Review, Wired UK, and other outlets. A group of tech companies and academic institutions spent tens of thousands of dollars in the past month â likely between $17,000 and $25,000 â on an ad campaign against New Yorkâs landmark AI safety bill, which may have reached more than two million people, according to Metaâs Ad Library. The landmark bill is called the RAISE Act, or the Responsible AI Safety and Education Act, and days ago, a version of it was signed by New York Governor Kathy Hochul. The closely watched law dictates that AI companies developing large models â OpenAI, Anthropic, Meta, Google, DeepSeek, etc. â must outline safety plans and transparency rules for reporting large-scale safety incidents to the attorney general. But the version Hochul signed â different than the one passed in both the New York State Senate and the Assembly in June â was a rewrite that made it much more favorable to tech companies. A group of more than 150 parents had sent the governor a letter urging her to sign the bill without changes. And the group of tech companies and academic institutions, called the AI Alliance, were part of the charge to defang it. The AI Alliance â the organization behind the opposition ad campaign â counts Meta, IBM, Intel, Oracle, Snowflake, Uber, AMD, Databricks, and Hugging Face among its members, which is not necessarily surprising. The group sent a letter in June to New York lawmakers about its âdeep concernâ about the bill and deemed it âunworkable.â But the group isnât just made up of tech companies. Its members include a number of colleges and universities all around the world, including New York University, Cornell University, Dartmouth College, Carnegie Mellon University, Northeastern University, Louisiana State University, and the University of Notre Dame, as well as Penn Engineering and Yale Engineering. The ads began on November 23 and ran with the title, âThe RAISE Act will stifle job growth.â They said that the legislation âwould slow down the New York technology ecosystem powering 400,000 high-tech jobs and major investments. Rather than stifling innovation, letâs champion a future where AI development is open, trustworthy, and strengthens the Empire State.â When The Verge asked the academic institutions listed above whether they were aware they had been inadvertently part of an ad campaign against widely discussed AI safety legislation, none responded to a request for comment, besides Northeastern, which did not provide a comment by publication time. In recent years, OpenAI and its competitors have increasingly been courting academic institutions to be part of research consortiums or offering technology directly to students for free. Many of the academic institutions that are part of the AI Alliance arenât directly involved in one-on-one partnerships with AI companies, but some are. For instance, Northeasternâs partnership with Anthropic this year translated to Claude access for 50,000 students, faculty, and staff across 13 global campuses, per Anthropicâs announcement in April . In 2023, OpenAI funded a journalism ethics initiative at NYU. Dartmouth announced a partnership with Anthropic earlier this month , a Carnegie Mellon University professor currently serves on OpenAIâs board, and Anthropic has funded programs at Carnegie Mellon. The initial version of the RAISE Act stated that developers must not release a frontier model âif doing so would create an unreasonable risk of critical harm,â which the bill defines as the death or serious injury of 100 people or more, or $1 billion or more in damages to rights in money or property stemming from the creation of a chemical, biological, radiological, or nuclear weapon. That definition also extends to an AI model that âacts with no meaningful human interventionâ and âwould, if committed by a human,â fall under certain crimes. The version Hochul signed removed this clause . Hochul also increased the deadline for disclosure for safety incidents and lessened fines, among other changes. The AI Alliance has lobbied previously against AI safety policies, including the RAISE Act, Californiaâs SB 1047 , and President Bidenâs AI executive order . It states that its mission is to âbring together builders and experts from various fields to collaboratively and transparently address the challenges of generative AI and democratize its benefits,â especially via âmember-driven working groups.â Some of the groupâs projects beyond lobbying have involved cataloguing and managing âtrustworthyâ datasets and creating a ranked list of AI safety priorities. The AI Alliance wasnât the only organization opposing the RAISE Act with ad dollars. As The Verge wrote recently , Leading the Future, a pro-AI super PAC backed by Perplexity AI, Andreessen Horowitz (a16z), Palantir cofounder Joe Lonsdale, and OpenAI president Greg Brockman, has spent money on ads targeting the cosponsor of the RAISE Act, New York State Assemblymember Alex Bores. But Leading the Future is a super PAC with a clear agenda, whereas the AI Alliance is a nonprofit thatâs partnered with a trade association â with the mission of âdeveloping AI collaboratively, transparently, and with a focus on safety, ethics, and the greater good.â Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates. Hayden Field Close Hayden Field Senior AI Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Hayden Field AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Policy Close Policy Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Policy Report Close Report Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Report Most Popular Most Popular GOGâs Steam-alternative PC game store is leaving CD Projekt, staying DRM-free Turn your PC into a Super Nintendo with Epilogueâs new USB dock LG is announcing its own Frame-style TV at CES Windows on Arm had another good year This experimental camera can focus on everything at once The Verge Daily A free daily digest of the news that matters most. Email (required) Sign Up By submitting your email, you agree to our Terms and Privacy Notice . This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply. Advertiser Content From This is the title for the native ad
How AI broke the smart home in 2025
Tech Close Tech Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Tech AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Amazon Close Amazon Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Amazon How AI broke the smart home in 2025 The arrival of generative AI assistants in our smart homes held such promise; instead, they struggle to turn on the lights. If you buy something from a Verge link, Vox Media may earn a commission. See our ethics statement. by Jennifer Pattison Tuohy Close Jennifer Pattison Tuohy Senior Reviewer, Smart Home Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Jennifer Pattison Tuohy Dec 23, 2025, 1:30 PM UTC Link Share If you buy something from a Verge link, Vox Media may earn a commission. See our ethics statement. Image: Cath Virginia / The Verge, Getty Images Part Of The Vergeâs 2025 in review see all Jennifer Pattison Tuohy Close Jennifer Pattison Tuohy Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Jennifer Pattison Tuohy is a senior reviewer with over twenty years of experience. She covers smart home, IoT, and connected tech, and has written previously for Wirecutter , Wired , Dwell , BBC , and US News . This morning, I asked my Alexa-enabled Bosch coffee machin e to make me a coffee. Instead of running my routine , it told me it couldnât do that. Ever since I upgraded to Alexa Plus, Amazonâs generative-AI-powered voice assistant, it has failed to reliably run my coffee routine, coming up with a different excuse almost every time I ask. Itâs 2025, and AI still canât reliably control my smart home. Iâm beginning to wonder if it ever will. The potential for generative AI and large language models to take the complexity out of the smart home, making it easier to set up, use, and manage connected devices, is compelling. So is the promise of a â new intelligence layer â that could unlock a proactive, ambient home. But this year has shown me that we are a long way from any of that. Instead, our reliable but limited voice assistants have been replaced with âsmarterâ versions that, while better conversationalists, canât consistently do basic tasks like operating appliances and turning on the lights. I want to know why. Iâm still waiting on the promise of voice assistants that can seamlessly control my smart home. Photo by Jennifer Pattison Tuohy / The Verge This wasnât the future we were promised. It was back in 2023, during an interview with Dave Limp , that I first became intrigued by the possibilities of generative AI and large language models for improving the smart home experience. Limp, then the head of Amazonâs Devices & Services division that oversees Alexa, was describing the capabilities of the new Alexa they were soon to launch (spoiler alert: it wasnât soon ). Along with a more conversational assistant that could actually understand what you said no matter how you said it, what stood out to me was the promise that this new Alexa could use its knowledge of the devices in your smart home, combined with the hundreds of APIs they plugged into it, to give the assistant the context it needed to make your smart home easier to use. From setting up devices to controlling them, unlocking all their features, and managing how they can interact with other devices, a smarter smart home assistant seemed to hold the potential to not only make it easier for enthusiasts to manage their gadgets but also make it easier for everyone to enjoy the benefits of the smart home. Related The problems with AI in the smart home I tasked Alexa Plus with tackling my to-do list â it was hit or miss I let Gemini watch my family for the weekend â it got weird AI companies want a new internet â and they think theyâve found the key AI canât even turn on the lights Fast-forward three years, and the most useful smart home AI upgrade we have is AI-powered descriptions for security camera notifications . Itâs handy, but itâs hardly the sea change I had hoped for. Itâs not that these new smart home assistants are a complete failure. Thereâs a lot I like about Alexa Plus; I even named it as my smart home software pick of the year . It is more conversational, understands natural language , and can answer many more random questions than the old Alexa. While it sometimes struggles with basic commands, it can understand complex ones; saying âI want it dimmer in here and warmerâ will adjust the lights and crank up the thermostat. Itâs better at managing my calendar, helping me cook, and other home-focused features. Setting up routines with voice is a huge improvement over wrestling with the Alexa app â even if running them isnât as reliable. Googleâs new Gemini for Home AI-powered smart home assistant wonât fully launch until next spring, when its new smart speaker arrives. Photo by Jennifer Pattison Tuohy / The Verge Google has promised similar capabilities with its Gemini for Home upgrade to its smart speakers, although thatâs rolling out at a glacial pace , and I havenât been able to try it beyond some on-the-rails demos . I was able to test Gemini for Homeâs feature that attempts to summarize whatâs happened at my home using AI-generated text descriptions from Nest camera footage. It was wildly inaccurate . As for Appleâs Siri, itâs still firmly stuck in the last decade of voice assistants, and it appears it will stay there for a while longer . The problem is that the new assistants arenât as consistent at controlling smart home devices as the old ones. While they were often frustrating to use, the old Alexa and Google Assistant (and the current Siri) would generally always turn on the lights when you asked them to, provided you used precise nomenclature. Today, their âupgradedâ counterparts struggle with consistency in basic functions like turning on the lights , setting timers, reporting on the weather, playing music , and running the routines and automations on which many of us have built our smart homes. Iâve noticed this in my testing, and online forums are full of users who have encountered it. Amazon and Google have acknowledged the struggles theyâve had in making their revamped generative AI-powered assistants reliably perform basic tasks . And itâs not limited to smart home assistants; ChatGPT canât consistently tell time or count. Why is this, and will it ever get better? To understand the problem, I spoke with two professors in the field of human-centric artificial intelligence with experience with agentic AI and smart home systems. My takeaway from those conversations is that, while itâs possible to make these new voice assistants do almost exactly what the old ones did, it will take a lot of work, and thatâs possibly work most companies just arenât interested in doing. Basically, weâre all beta testers for the AI. Considering there are limited resources in this field and ample opportunity to do something much more exciting (and more profitable) than reliably turn on the lights, thatâs the way theyâre moving, according to experts I spoke with. Given all these factors, it seems the easiest way to improve the technology is to just deploy it in the real world and let it improve over time. Which is likely why Alexa Plus and Gemini for Home are in âearly accessâ phases. Basically, weâre all beta testers for the AI. The bad news is it could be a while until it gets better. In his research, Dhruv Jain , assistant professor of Computer Science & Engineering at the University of Michigan and director of the Soundability Lab , has also found that newer models of smart home assistants are less reliable. âItâs more conversational, people like it, people like to talk to it, but itâs not as good as the previous one,â he says. âI think [tech companiesâ] model has always been to release it fairly fast, collect data, and improve on it. So, over a few years, we might get a better model, but at the cost of those few years of people wrestling with it.â The Alexa that launched in 2014 on the original Echo smart speaker isnât capable enough for the future Amazon is working toward. Image: Amazon The inherent problem appears to be that the old and new technologies donât mesh. So, to build their new voice assistants, Amazon, Google, and Apple have had to throw out the old and build something entirely new . However, they quickly discovered that these new LLMs were not designed for the predictability and repetitiveness that their predecessors excelled at. âIt was not as trivial an upgrade as everyone originally thought,â says Mark Riedl, a professor at the School of Interactive Computing at Georgia Tech . âLLMs understand a lot more and are open to more arbitrary ways to communicate, which then opens them to interpretation and interpretation mistakes.â Basically, LLMs just arenât designed to do what prior command-and-control-style voice assistants did. âThose voice assistants are what we call âtemplate matchers,ââ explains Riedl. âThey look for a keyword, when they see it, they know that there are one to three additional words to expect.â For example, you say âPlay radio,â and they know to expect a station call code next. âIt was not as trivial an upgrade as everyone originally thought.â â Mark Riedl LLMs, on the other hand, âbring in a lot of stochasticity â randomness,â explains Riedl. Asking ChatGPT the same prompt multiple times may produce multiple responses. This is part of their value, but itâs also why when you ask your LLM-powered voice assistant to do the same thing you asked it yesterday, it might not respond the same way. âThis randomness can lead to misunderstanding basic commands because sometimes they try to overthink things too much,â he says. To fix this, companies like Amazon and Google have develop
Googleâs and OpenAIâs Chatbots Can Strip Women in Photos Down to Bikinis
Reece Rogers Gear Dec 23, 2025 6:30 AM Googleâs and OpenAIâs Chatbots Can Strip Women in Photos Down to Bikinis Users of AI image generators are offering each other instructions on how to use the tech to alter pictures of women into realistic, revealing deepfakes. Photo-Illustration: Wired Staff; Getty Images Comment Loader Save Story Save this story Comment Loader Save Story Save this story Some users of popular chatbots are generating bikini deepfakes using photos of fully clothed women as their source material. Most of these fake images appear to be generated without the consent of the women in the photos. Some of these same users are also offering advice to others on how to use the generative AI tools to strip the clothes off of women in photos and make them appear to be wearing bikinis. Under a now-deleted Reddit post titled âgemini nsfw image generation is so easy,â users traded tips for how to get Gemini , Googleâs generative AI model, to make pictures of women in revealing clothes. Many of the images in the thread were entirely AI, but one request stood out. A user posted a photo of a woman wearing an Indian sari, asking for someone to âremoveâ her clothes and âput a bikiniâ on instead. Someone else replied with a deepfake image to fulfil the request. After WIRED notified Reddit about these posts and asked the company for comment, Redditâs safety team removed the request and the AI deepfake. âReddit's sitewide rules prohibit nonconsensual intimate media, including the behavior in question,â said a spokesperson. The subreddit where this discussion occurred, r/ChatGPTJailbreak, had over 200,000 followers before Reddit banned it under the platformâs â don't break the site â rule. As generative AI tools that make it easy to create realistic but false images continue to proliferate, users of the tools have continued to harass women with nonconsensual deepfake imagery. Millions have visited harmful ânudifyâ websites , designed for users to upload real photos of people and request for them to be undressed using generative AI. With xAIâs Grok as a notable exception, most mainstream chatbots donât usually allow the generation of NSFW images in AI outputs. These bots, including Googleâs Gemini and OpenAIâs ChatGPT, are also fitted with guardrails that attempt to block harmful generations. In November, Google released Nano Banana Pro , a new imaging model that excels at tweaking existing photos and generating hyperrealistic images of people. OpenAI responded last week with its own updated imaging model, ChatGPT Images . As these tools improve, likenesses may become more realistic when users are able to subvert guardrails. In a separate Reddit thread about generating NSFW images, a user asked for recommendations on how to avoid guardrails when adjusting someoneâs outfit to make the subjectâs skirt appear tighter. In WIREDâs limited tests to confirm that these techniques worked on Gemini and ChatGPT, we were able to transform images of fully clothed women into bikini deepfakes using basic prompts written in plain English. When asked about users generating bikini deepfakes using Gemini, a spokesperson for Google said the company has "clear policies that prohibit the use of [its] AI tools to generate sexually explicit content." The spokesperson claims Google's tools are continually improving at "reflecting" what's laid out in its AI policies . In response to WIREDâs request for comment about users being able to generate bikini deepfakes with ChatGPT, a spokesperson for OpenAI claims the company loosened some ChatGPT guardrails this year around adult bodies in nonsexual situations. The spokesperson also highlights OpenAIâs usage policy , stating that ChatGPT users are prohibited from altering someone elseâs likeness without consent and that the company takes action against users generating explicit deepfakes, including account bans. Online discussions about generating NSFW images of women remain active. This month, a user in the r/GeminiAI subreddit offered instructions to another user on how to change women's outfits in a photo into bikini swimwear. (Reddit deleted this comment when we pointed it out to them.) Corynne McSherry, a legal director at the Electronic Frontier Foundation , sees âabusively sexualized imagesâ as one of AI image generators' core risks. She mentions that these image tools can be used for other purposes outside of deepfakes and that focusing how the tools are used is criticalâas well as âholding people and corporations accountableâ when potential harm is caused.
ChatGPTâs yearly recap sums up your conversations with the chatbot
News Close News Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All News AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Tech Close Tech Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Tech ChatGPTâs yearly recap sums up your conversations with the chatbot Youâll get a personalized âawardâ and an AI-generated image based on how you used ChatGPT this year. Youâll get a personalized âawardâ and an AI-generated image based on how you used ChatGPT this year. by Emma Roth Close Emma Roth News Writer Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Emma Roth Dec 22, 2025, 10:12 PM UTC Link Share Image: The Verge Part Of The annual app recaps for 2025: all Wrapped up see all updates Emma Roth Close Emma Roth Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Emma Roth is a news writer who covers the streaming wars, consumer tech, crypto, social media, and much more. Previously, she was a writer and editor at MUO. ChatGPT is joining the flood of apps offering yearly recaps for users. Itâs rolling out a âYear in Reviewâ feature that will show you a bunch of stats â like how many messages you sent to the chatbot in 2025 â as well as give you an AI-generated pixel art-style image that encompasses some of the topics you talked about this year. The image I received showed an aquarium beside a game cartridge, an Instant Pot, and a computer screen, reflecting some of the questions I had related to retro game consoles, cooking, and my fish tank setup. Image: ChatGPT There are other personalized summaries, too, like a rundown of the themes most prevalent in your chats, a description of your chat style, and which day you sent the most messages to the chatbot. Youâll also see an âarchetypeâ that puts you into a category based on how you used the app this year, such as âThe Producerâ or âThe Navigator,â along with a customized award, like the one I got: âInstant Pot Prodigy.â The yearly recaps are rolling out now to users in the US, UK, Canada, New Zealand, and Australia. Itâs only available if youâve given ChatGPT permission to reference your past conversations and personal preferences. You can find your year in review by selecting the option on the homepage of the ChatGPT app on mobile or desktop, or by prompting ChatGPT to âshow my year in review.â Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates. Emma Roth Close Emma Roth News Writer Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Emma Roth AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Apps Close Apps Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Apps News Close News Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All News OpenAI Close OpenAI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All OpenAI Tech Close Tech Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Tech More in: The annual app recaps for 2025: all Wrapped up AT&Tâs âwrappedâ says its network now averages an exabyte of data every day. Richard Lawler Dec 23 Steamâs 2025 Replay is live. Jay Peters Dec 16 Your year in NYT games. Jay Peters Dec 16 Most Popular Most Popular GOGâs Steam-alternative PC game store is leaving CD Projekt, staying DRM-free Turn your PC into a Super Nintendo with Epilogueâs new USB dock LG is announcing its own Frame-style TV at CES This experimental camera can focus on everything at once The Canon EOS R6 Mark III is great, but this lens is amazing The Verge Daily A free daily digest of the news that matters most. Email (required) Sign Up By submitting your email, you agree to our Terms and Privacy Notice . This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply. Advertiser Content From This is the title for the native ad
Indie Game Awards retracts Expedition 33 prizes due to generative AI
News Close News Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All News AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Entertainment Close Entertainment Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Entertainment Indie Game Awards retracts Expedition 33 prizes due to generative AI The organization also retracted an award from a game sold by Palmer Luckeyâs ModRetro. The organization also retracted an award from a game sold by Palmer Luckeyâs ModRetro. by Jay Peters Close Jay Peters Senior Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Jay Peters Dec 22, 2025, 6:47 PM UTC Link Share Image: Sandfall Interactive Jay Peters Close Jay Peters Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Jay Peters is a senior reporter covering technology, gaming, and more. He joined The Verge in 2019 after nearly two years at Techmeme. Clair Obscur: Expedition 33 had earned another Game of the Year award by the Indie Game Awards last week, but the organization has since announced that the award would be retracted because developer Sandfall Interactive used generative AI during development. The Indie Game Awards also rescinded the Debut Game award given to Expedition 33 . Here is the Indie Game Awardsâ explanation, from an FAQ : The Indie Game Awards have a hard stance on the use of gen AI throughout the nomination process and during the ceremony itself. When it was submitted for consideration, a representative of Sandfall Interactive agreed that no gen AI was used in the development of Clair Obscur: Expedition 33. In light of a resurfaced interview with Sandfall Interactive confirming the use of gen AI art in production being brought to our attention on the day of the Indie Game Awards 2025 premiere, this does disqualify Clair Obscur: Expedition 33 from its nomination. While the assets in question were patched out and it is a wonderful game, it does go against the regulations we have in place. The Indie Game Awardsâ Mike Towndrow also explained the decision in a video on Bluesky . The Indie Game Awardsâ criteria, as outlined in that FAQ, says that âGames developed using generative AI are strictly ineligible for nomination.â Sandfall Interactive didnât immediately reply to a request for comment. Game of the Year is instead being awarded to puzzle game Blue Prince . Publisher Raw Fury said on Sunday that âthere is no AI used in Blue Princeâ and that the game âwas built and crafted with full human instinctâ by Tonda Ros and the Dogubomb team. âAs gen AI becomes more prevalent in our industry, we will better navigate it appropriately,â the Indie Game Awards says. Related Larianâs CEO says the studio isnât âtrimming down teams to replace them with AIâ Clair Obscur: Expedition 33 wins Game of the Year. The Indie Game Awards is also retracting an Indie Vanguard award from studio Gortyn Code, which developed the Game Boy-inspired game Chantey . The game is sold on a physical cartridge by Palmer Luckeyâs ModRetro, which makes the Chromatic Game Boy. Luckey also founded defense contractor Anduril, and Chromatic recently announced an Anduril-branded Chromatic made from âthe same magnesium aluminum alloy as Andurilâs attack drones.â âThe IGAs nomination committee were unfortunately made aware of ModRetroâs nature and principles the day after the 2025 premiere with the news of their upcoming handheld console,â the Indie Game Awards says. Because of Chantey âs ties with ModRetro, âIndie Vanguard has also been retracted as we do not want to provide the company with a platform.â The organization also says that âThe decision does not reflect Gortyn Code, but ModRetro alone.â Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates. Jay Peters Close Jay Peters Senior Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Jay Peters AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Entertainment Close Entertainment Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Entertainment Gaming Close Gaming Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Gaming News Close News Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All News Most Popular Most Popular GOGâs Steam-alternative PC game store is leaving CD Projekt, staying DRM-free Turn your PC into a Super Nintendo with Epilogueâs new USB dock LG is announcing its own Frame-style TV at CES The Canon EOS R6 Mark III is great, but this lens is amazing This experimental camera can focus on everything at once The Verge Daily A free daily digest of the news that matters most. Email (required) Sign Up By submitting your email, you agree to our Terms and Privacy Notice . This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply. Advertiser Content From This is the title for the native ad
OpenAIâs Child Exploitation Reports Increased Sharply This Year
Maddy Varner Business Dec 22, 2025 11:32 AM OpenAIâs Child Exploitation Reports Increased Sharply This Year The company made 80 times as many reports to the National Center for Missing & Exploited Children during the first six months of 2025 as it did in the same period a year prior. Photo-Illustration: WIRED Staff; Getty Images Save Story Save this story Save Story Save this story OpenAI sent 80 times as many child exploitation incident reports to the National Center for Missing & Exploited Children during the first half of 2025 as it did during a similar time period in 2024, according to a recent update from the company. The NCMECâs CyberTipline is a Congressionally authorized clearinghouse for reporting child sexual abuse material (CSAM) and other forms of child exploitation. Companies are required by law to report apparent child exploitation to the CyberTipline. When a company sends a report, NCMEC reviews it and then forwards it to the appropriate law enforcement agency for investigation. Statistics related to NCMEC reports can be nuanced. Increased reports can sometimes indicate changes in a platformâs automated moderation, or the criteria it uses to decide whether a report is necessary, rather than necessarily indicating an increase in nefarious activity. Additionally, the same piece of content can be the subject of multiple reports, and a single report can be about multiple pieces of content. Some platforms, including OpenAI, disclose the number of both the reports and the total pieces of content they were about for a more complete picture. OpenAI spokesperson Gaby Raila said in a statement that the company made investments toward the end of 2024 âto increase [its] capacity to review and action reports in order to keep pace with current and future user growth.â Raila also said that the time frame corresponds to âthe introduction of more product surfaces that allowed image uploads and the growing popularity of our products, which contributed to the increase in reports.â In August, Nick Turley, vice president and head of ChatGPT, announced that the app had four times the amount of weekly active users than it did the year before. During the first half of 2025, the number of CyberTipline reports OpenAI sent was roughly the same as the amount of content OpenAI sent the reports aboutâ75,027 compared to 74,559. In the first half of 2024, it sent 947 CyberTipline reports about 3,252 pieces of content. Both the number of reports and pieces of content the reports saw a marked increase between the two time periods. Content, in this context, could mean multiple things. OpenAI has said that it reports all instances of CSAM, including uploads and requests, to NCMEC. Besides its ChatGPT app, which allows users to upload filesâincluding imagesâand can generate text and images in response, OpenAI also offers access to its models via API access. The most recent NCMEC count wouldnât include any reports related to video-generation app Sora , as its September release was after the time frame covered by the update. The spike in reports follows a similar pattern to what NCMEC has observed at the CyberTipline more broadly with the rise of generative AI. The centerâs analysis of all CyberTipline data found that reports involving generative AI saw a 1,325 percent increase between 2023 and 2024. NCMEC has not yet released 2025 data, and while other large AI labs like Google publish statistics about the NCMEC reports theyâve made, they donât specify what percentage of those reports are AI-related. OpenAIâs update comes at the end of a year where the company and its competitors have faced increased scrutiny over child safety issues beyond just CSAM. Over the summer, 44 state attorneys general sent a joint letter to multiple AI companies including OpenAI, Meta, Character.AI, and Google, warning that they would âuse every facet of our authority to protect children from exploitation by predatory artificial intelligence products.â Both OpenAI and Character.AI have faced multiple lawsuits from families or on behalf of individuals who allege that the chatbots contributed to their childrenâs deaths. In the fall, the US Senate Committee on the Judiciary held a hearing on the harms of AI chatbots, and the US Federal Trade Commission launched a market study on AI companion bots that included questions about how companies are mitigating negative impacts, particularly to children. (I was previously employed by the FTC and was assigned to work on the market study prior to leaving the agency.) In recent months, OpenAI has rolled out new safety-focused tools more broadly. In September, OpenAI rolled out several new features for ChatGPT, including parental controls , as part of its work âto give families tools to support their teensâ use of AI.â Parents and their teens can link their accounts, and parents can change their teenâs settings, including by turning off voice mode and memory, removing the ability for ChatGPT to generate images, and opting their kid out of model training. OpenAI said it could also notify parents if their teenâs conversations showed signs of self-harm, and potentially also notify law enforcement if it detected an imminent threat to life and wasnât able to get in touch with a parent. In late October, to cap off negotiations with the California Department of Justice over its proposed recapitalizations plan, OpenAI agreed to âcontinue to undertake measures to mitigate risks to teens and others in connection with the development and deployment of AI and of AGI.â The following month, OpenAI released its Teen Safety Blueprint , in which it said it was constantly improving its ability to detect child sexual abuse and exploitation material, and reporting confirmed CSAM to relevant authorities, including NCMEC.
Chipwrecked
AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Business Close Business Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Business Features Close Features Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Features Chipwrecked Nvidia has built an empire on circular deals for chips. Can anything knock it down? If you buy something from a Verge link, Vox Media may earn a commission. See our ethics statement. by Elizabeth Lopatto Close Elizabeth Lopatto Senior Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Elizabeth Lopatto Dec 22, 2025, 4:00 PM UTC Link Share If you buy something from a Verge link, Vox Media may earn a commission. See our ethics statement. Cath Virginia / The Verge Part Of Chip race: Microsoft, Meta, Google, and Nvidia battle it out for AI chip supremacy see all updates Elizabeth Lopatto Close Elizabeth Lopatto Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Elizabeth Lopatto is a reporter who writes about tech, money, and human behavior. She joined The Verge in 2014 as science editor. Previously, she was a reporter at Bloomberg. The AI data center build-out, as it currently stands, is dependent on two things: Nvidia chips and borrowed money. Perhaps it was inevitable that people would begin using Nvidia chips to borrow money. As the craze has gone on, I have begun to worry about the weaknesses of the AI data center boom; looking deeper into the financial part of this world, I have not been reassured. Nvidia has plowed plenty of money into the AI space, with more than 70 investments in AI companies just this year, according to PitchBook data. Among the billions itâs splashed out, thereâs one important category: neoclouds, as exemplified by CoreWeave , the publicly traded, debt-laden company premised on the bet that we will continue building data centers forever. CoreWeave and its ilk have turned around and taken out debt to buy Nvidia chips to put in their data centers, putting up the chips themselves as loan collateral â and in the process effectively turning $1 in Nvidia investment into $5 in Nvidia purchases. This is great for Nvidia. Iâm not convinced itâs great for anyone else. Do you have information about loans in the AI industry? You can reach Liz anonymously at lopatto.46 on Signal using a non-work device. There has been a lot of talk about the raw technical details of how these chips depreciate, and specifically whether these chips lose value so fast they make these loans absurd. While I am impressed by the sheer amount of nerd energy put into this question, I do feel this somewhat misses the point: the loans mean that Nvidia has an incentive to bail out this industry for as long as it can because the majority of GPU-backed loans are made using Nvidiaâs own chips as collateral. Of course, that also means that if something goes wrong with Nvidiaâs business, this whole sector is in trouble. And judging by the increasing competition its chips face, something could go wrong soon. Can startups outrun chip depreciation â and is it happening faster than they say? Loans based on depreciating assets are nothing new. For the terminally finance-brained, products like GPUs register as interchangeable widgets (in the sense of âan unnamed article considered for purposes of hypothetical example ,â not âgadgetâ or âsoftware applicationâ) not substantively different from trucks, airplanes, or houses. So a company like CoreWeave can package some chips up with AI customer contracts and a few other assets and assemble a valuable enough bundle to secure debt, typically for buying more chips. If it defaults on the loan, the lender can repossess the collateral, the same way a bank can repossess a house. One way lenders can hedge their bets against risky assets is by pricing the risk into the interest rate. (There is another way of understanding debt, and we will get there in a minute.) A 10-year mortgage on a house is currently 5.3 percent. CoreWeaveâs first GPU-backed loan, made in 2023, had 14 percent interest in the third quarter of this year. (The rate floats.) âYou have so many forces acting in making them a natural monopoly, and this amplifies that.â Another way lenders can try to reduce their risk is by asking for a high percentage of collateral relative to the loan. This is expressed as a loan-to-value ratio (LTV). If I buy a house for $500,000, I usually have to contribute a downpayment â call it 20 percent â and use my loan for the rest. That loan, for $400,000, means I have a (LTV) ratio of 80 percent. GPU loansâ LTV vary widely, based on how long the loan is, faith in companiesâ management teams, and other contract factors, says Ryan Little, the senior managing director of equipment financing at Trinity Capital , who has made GPU loans. Some of these loans have LTVs as low as 50 percent; others are as high as 110 percent. GPU-backed loans are competitive, and Trinity Capital has occasionally lost deals to other lenders as well as vendor financing programs. The majority of these loans are made on Nvidia chips, which could solidify the companyâs hold on the market, says Vikrant Vig, a professor of finance at Stanford Universityâs graduate school of business. If a company needs to buy GPUs, it might get a lower cost of financing on Nvidiaâs, because Nvidia GPUs are more liquid. âYou have so many forces acting in making them a natural monopoly,â Vig says, âand this amplifies that.â Figuring out how much GPUs are worth and how long theyâll last is not as clear as it is with a house Nvidia declined to comment. CoreWeave declined to comment. Not everyone is sold on the loans. âAt current market prices, we donât do them and we donât evaluate them,â says Keri Findley, the CEO of Tacora Capital. With a car, she knows the depreciation curve over time. But sheâs less sure about GPUs. For now, she guesses GPUs will depreciate very, very quickly. First, the chipâs power might be leased to Microsoft, but it might need to be leased a second or third time to be worth investing in. Itâs not yet clear how much of a secondary or tertiary market there will be for old chips. Figuring out how much GPUs are worth and how long theyâll last is not as clear as it is with a house. In a corporate filing, CoreWeave notes that how much it can borrow depends on how much the GPUs are worth, and that will decrease as the GPUs have less value. The value, however, is fixed â and so if the value of the GPUs deteriorates faster than projected, CoreWeave will have to top off its loans. Some investors, including famed short-seller Michael Burry, claim that many companies are making depreciation estimates that are astonishingly wrong â by claiming GPUs will be valuable for longer than they will be in reality. According to Burry, the so-called hyperscalers (Google, Meta, Microsoft, Oracle, and Amazon) are understating depreciation of their chips by $176 billion between 2026 and 2028. Little is betting that even if some of the AI companies vanish, there will still be plenty of demand for the chips that secure the loan Burry isnât primarily concerned with neoclouds, but they are uniquely vulnerable. The hyperscalers can take a write-down without too much damage if they have to â they have other lines of business. The neoclouds canât. At minimum they will have to take write-downs; at maximum, there will be write-downs and complications on their expensive loans. They may have to provide more collateral at a time when thereâs less demand for their services, which also can command less cash than before. Trinity Capital is keeping its loans on its books; Little is betting that even if some of the AI companies vanish, there will still be plenty of demand for the chips that secure the loans. Letâs say one of the neoclouds is forced into bankruptcy because itâs gotten its chipsâ depreciation wrong, or for some other reason. Most of their customers may very well continue running their programs while banks repossess the servers and then sell them for pennies on the dollar. This is not the end of the world for the neocloudâs lenders or customers, though itâs probably annoying. That situation will, however, bite Nvidia twice: first by flooding the market with its old chips, and second by reducing its number of customers. And if something happens that makes several of these companies fail at once, the situation is worse. So how vulnerable is Nvidia? The risky business of banking on GPUs Part of whatâs fueling the AI lending boom is private credit firms, which both need to produce returns for their investors and outcompete each other. If they miscalculate how risky the GPU loans are, they may very well get hit â and the impact could ripple out to banks. That could lead to widespread chaos in the broader economy. Earlier, we talked about understanding interest rates as pricing risk. There is another, perhaps more nihilistic, way of understanding interest rates: as the simple result of supply and demand. Loans are a product like any other. Particularly for lenders that donât plan on keeping them on their own books, pricing risk may not be a primary concern â making and flipping the loans are. AI spending is exorbitant â analysts from Morgan Stanley expect $3 trillion in spending by the end of 2028 Hereâs a way of thinking about it: Letâs say a neocloud startup called WarSieve comes to my private credit agency, Problem Child Holdings, and says, âHey, thereâs a global shortage of GPUs, and we have a bunch. Can we borrow against them?â I might respond, âWell, I donât really know if thereâs a market for these and Iâm scared you might be riff raff. Letâs do a 15 percent interest rate.â WarSieve doesnât have better options, so it agr