🧠

AI Pulse

Artificial Intelligence, Machine Learning, Deep Learning breakthroughs

AI-Powered Dating Is All Hype. IRL Cruising Is the Future
AI Pulse 6 min read

AI-Powered Dating Is All Hype. IRL Cruising Is the Future

Dating apps and AI companies have been touting bot wingmen for months. But the future might just be good old-fashioned meet-cutes.

Dec 31, 2025 Read →
The Great Big Power Play
AI Pulse 7 min read

The Great Big Power Play

Molly Taft Science Dec 30, 2025 6:00 AM The Great Big Power Play US support for nuclear energy is soaring. Meanwhile, coal plants are on their way out and electricity-sucking data centers are meeting huge pushback. Welcome to the next front in the energy battle. PHOTO-ILLUSTRATION: WIRED STAFF; GETTY IMAGES Save Story Save this story Save Story Save this story Take yourself back to 2017. Get Out and The Shape of Water were playing in theaters, Zohran Mamdani was still known as rapper Young Cardamom, and the Trump administration, freshly in power, was eager to prop up its favored energy sources. That year, the administration introduced a series of subsidies for struggling coal-fired power plants and nuclear power plants, which were facing increasing price pressures from gas and cheap renewables. The plan would have put taxpayers on the hook for billions of dollars. It didn’t work. In subsequent years, the nuclear industry kept running into roadblocks. Three nuclear plants have shut down since 2020, while construction of two of the only four reactors started since 2000 was put on hold after a decade and billions of dollars following a political scandal . Coal, meanwhile, continued its long decline: It comprises just 17 percent of the US power mix , down from a high of 45 percent in 2010. Now, both of these energy sources are getting second chances. The difference this time is the buzz around AI , but it isn’t clear that the outcome will be much different. expired: canceling nuclear tired: canceling coal wired: the free market Read more Expired/Tired/WIRED 2025 stories here . Throughout 2025, the Trump administration has not just gone all in on promoting nuclear, but positioned it specifically as a solution to AI’s energy needs. In May, the president signed a series of executive orders intended to boost nuclear energy in the US, including ordering 10 new large reactors to be constructed by 2030. A pilot program at the Department of Energy created as a result of May’s executive orders—coupled with a serious reshuffling of the country’s nuclear regulator—has already led to breakthroughs from smaller startups. Energy secretary Chris Wright said in September that AI’s progress “will be accelerated by rapidly unlocking and deploying commercial nuclear power.” The administration’s push is mirrored by investments from tech companies. Giants like Google, Amazon, and Microsoft have inked numerous deals in recent years with nuclear companies to power data centers; Microsoft even joined the World Nuclear Association. Multiple retired reactors in the US are being considered for restarts—including two of the three that have closed in the past five years—with the tech industry supporting some of these arrangements . (This includes Microsoft’s high-profile restart of the infamous Three Mile Island, which is also being backed by a $1 billion loan from the federal government.) It’s a good time for both the private and public sectors to push nuclear: public support for nuclear power is the highest it’s been since 2010. Despite all of this, the practicalities of nuclear energy leave its future in doubt. Most of nuclear’s costs come not from onerous regulations but from construction . Critics are wary of juiced-up valuations for small modular reactor companies, especially those with deep connections to the Trump administration. An $80 billion deal the government struck with reactor giant Westinghouse in October is light on details, leaving more questions than answers for the industry. And despite high-profile tech deals that promise to get reactors up and running in a few years, the timelines remain tricky. Still, insiders say that this year marked a turning point. “Nuclear technology has been seen by proponents as the neglected and unjustly villainized hero of the energy world,” says Brett Rampal, a nuclear power expert who advises investors. “Now, full-throated support from the president, Congress, tech companies, and the common person feels like generational restitution and a return to meritocracy.” Nuclear isn’t the only form of energy that seems to be getting a second start thanks to AI. In April, President Trump signed a series of executive orders to boost US coal to power AI; Wright has since ordered two plants that were slated to be retired to stay online via emergency order. The administration has also scrambled to make it easier to run coal plants, in particular focusing on doing away with pollution regulation. These efforts—and the endless demand for energy from AI—may have extended a lifeline to coal: More than two dozen generating units that were scheduled to retire across the country are now staying online , separate from Wright’s order, with some getting yearslong reprieves. A complete recovery for the industry, however, is still an open question. A recent analysis of the US power sector finds that almost all of the 10 largest utilities in the US are significantly slashing their reliance on coal. (Many of these utilities, the analysis shows, have been looking to replace coal-fired power with more nuclear.) Part of what may keep coal on its downward track in the US—albeit with an extended lifeline—is simply its bad PR. The tech of the future, after all, isn’t supposed to pollute the air and drive temperatures up; while AI has significantly set Big Tech back from its climate-change goals, these companies are theoretically still committed to not frying the planet. And while tech giants are scrambling to align themselves with nuclear, which does not produce direct carbon emissions, no big companies have openly partnered with a struggling coal plant or splashed out a press release about how they’re seeking to produce more energy from coal. (Some retired coal plants are being proposed as sites for data centers, powered by natural gas.) Some companies are trying to develop technologies that would capture carbon emissions from coal plants, but outlook for those technologies is bearish following some high-profile failures. “Emissions [are] always going to factor into the discussion” for investors, says Rampal. The Oval Office playing favorites with energy sources doesn’t mean that it can defeat the market. Utility-scale solar and onshore wind remain some of the cheapest forms of energy around, even without government subsidies. And while Washington looks backward, other countries are continuing massive buildouts of renewable energy. China’s emissions have taken a nosedive over the past 18 months, thanks in large part to a huge expansion of renewable energy. Coal’s use in the power sector is declining due to competition from renewables, while nuclear made up only a small slice of total power use. If the administration’s goal is to defeat China on AI, it might want to start by taking a look at its energy playbook.

Dec 30, 2025 Read →
3 New Tricks to Try With Google Gemini Live After Its Latest Major Upgrade
AI Pulse 5 min read

3 New Tricks to Try With Google Gemini Live After Its Latest Major Upgrade

David Nield Gear Dec 29, 2025 6:00 AM 3 New Tricks to Try With Google Gemini Live After Its Latest Major Upgrade Google's AI is now even smarter, and more versatile. Photo-Illustration: Wired Staff; Getty Images Comment Loader Save Story Save this story Comment Loader Save Story Save this story Gemini Live is the more conversational, natural language way of interacting with the Google Gemini AI bot using your voice. The idea is you chat with it like you would chat with a friend, interruptions and all, even if the actual answers are the same as you'd get from typing your queries into Gemini as normal. Now, about a year and a half after its debut, Gemini Live has been given what Google is describing as its “biggest update ever.” The update makes the Gemini Live mode even more natural and even more conversational than before, with a better understanding of tone, nuance, pronunciation, and rhythm. There's no real visible indication that anything has changed, and often a lot of the responses will seem the same as before too. However, there are certain areas where you can tell the difference that the latest upgrade has made—and so here's how to make the most of the new and improved Gemini Live. The update is rolling out now for Gemini on Android and iOS . To access Gemini Live, launch the Gemini app, then tap the Live button in the lower right hand corner (it looks vaguely like a sound wave) and start talking. The Gemini Live interface. David Nield Hear Some Stories Gemini Live can now add more feeling and variation to its storytelling capabilities—which can be useful for history lessons, bedtimes for the children, and creative brainstorming. The AI will even add in different accents and tones where appropriate, to help you distinguish between the characters and scenes. One of Google's own examples for how this works best is to get Gemini Live to tell you the story of the Roman Empire from the perspective of Julius Caesar. It's a challenge for Gemini that requires some leaps in perspective and imagination, and to use tone and style appropriately in a way that Gemini Live should now be better at. You don't have to restrict yourself to Julius Caesar or the Roman Empire either. You could get Gemini Live to give you a retelling of Pride and Prejudice from the perspective of each different Bennett sister, for example, or have the AI spin up a tale of what life would have been like in your part of the world 100, 200, or 300 years ago. Learn Some Skills Another area where Gemini Live's new capabilities make a noticeable difference is in educating and explaining: You can get it to give you a crash course (or a longer tutorial) on any topic of your choosing, anything from the intricacies of human genetics to the best ways to clean a carpet. You can even get Gemini Live to teach you a language . The AI can now go at a pace to suit you, which is particularly useful when you're trying to learn something new. If you need Gemini Live to slow down, speed up, or repeat something, then just say so. If you've only got a certain amount of time spare, let Gemini know when you're chatting to it. As usual, be wary of AI hallucinations , and maybe don't trust that everything you hear is fully accurate or verified. If you're wanting to learn something like how to rewire the lighting in your home or fix a problematic car engine, double-check the guidance you're getting with other sources, but Gemini Live is at least a useful starting point. Test Some Accents One of the new skills that Gemini Live has with this latest update is the ability to speak in different accents. Perhaps you want the history of the Wild West spoken by a cowboy, or you need the intricacies of the British Royal Family explained by someone with an authentic London accent. Gemini Live can now handle these requests. This extends to the language learning mentioned above, because you can hear words and phrases spoken as they would be by native speakers—and then try to copy the pronunciation and phrasing. While Gemini Live doesn't cover every language and accent across the globe, it can access plenty of them. There are certain safeguards built into Gemini Live here, and your requests might get refused if you veer too close to derogatory uses of accents and speech, or if you're trying to impersonate real people. However, it's another fun way to test out the AI, and to get responses that are more varied and personalized.

Dec 29, 2025 Read →
Billion-Dollar Data Centers Are Taking Over the World
AI Pulse 7 min read

Billion-Dollar Data Centers Are Taking Over the World

Lauren Goode Business Dec 28, 2025 6:00 AM Billion-Dollar Data Centers Are Taking Over the World The battle for AI dominance has left a large footprint—and it’s only getting bigger and more expensive. ILLUSTRATION: JAMES MARSHALL Save Story Save this story Save Story Save this story When Sam Altman said one year ago that OpenAI ’s Roman Empire is the actual Roman Empire , he wasn’t kidding. In the same way that the Romans gradually amassed an empire of land spanning three continents and one-ninth of the Earth’s circumference, the CEO and his cohort are now dotting the planet with their own latifundia—not agricultural estates, but AI data centers . expired: on-prem tired: “Big Data” wired: billion-dollar data centers Read more Expired/Tired/WIRED 2025 stories here . Tech executives like Altman, Nvidia CEO Jensen Huang , Microsoft CEO Satya Nadella , and Oracle cofounder Larry Ellison are fully bought in to the idea that the future of the American (and possibly global) economy are these new warehouses stocked with IT infrastructure. But data centers, of course, aren’t actually new. In the earliest days of computing there were giant power-sucking mainframes in climate-controlled rooms, with co-ax cables moving information from the mainframe to a terminal computer. Then the consumer internet boom of the late 1990s spawned a new era of infrastructure. Massive buildings began popping up in the backyard of Washington, DC, with racks and racks of computers that stored and processed data for tech companies. A decade later, “the cloud” became the squishy infrastructure of the internet. Storage got cheaper. Some companies, like Amazon, capitalized on this . Giant data centers continued to proliferate, but instead of a tech company using some combination of on-premise servers and rented data center racks, they offloaded their computing needs to a bunch of virtualized environments. (“What is the cloud ?” a perfectly intelligent family member asked me in the mid-2010s, “and why am I paying for 17 different subscriptions to it?”) All the while tech companies were hoovering up petabytes of data, data that people willingly shared online, in enterprise workspaces, and through mobile apps. Firms began finding new ways to mine and structure this “Big Data,” and promised that it would change lives. In many ways, it did. You had to know where this was going. Now the tech industry is in the fever-dream days of generative AI, which requires new levels of computing resources. Big Data is tired; big data centers are here, and wired—for AI. Faster, more efficient chips are needed to power AI data centers, and chipmakers like Nvidia and AMD have been jumping up and down on the proverbial couch, proclaiming their love for AI. The industry has entered an unprecedented era of capital investments in AI infrastructure, tilting the US into positive GDP territory. These are massive, swirling deals that might as well be cocktail party handshakes, greased with gigawatts and exuberance, while the rest of us try to track real contracts and dollars. OpenAI, Microsoft, Nvidia, Oracle, and SoftBank have struck some of the biggest deals. This year an earlier supercomputing project between OpenAI and Microsoft, called Stargate, became the vehicle for a massive AI infrastructure project in the US. ( President Donald Trump called it the largest AI infrastructure project in history, because of course he did, but that may not have been hyperbolic.) Altman, Ellison, and SoftBank CEO Masayoshi Son were all in on the deal, pledging $100 billion to start, with plans to invest up to $500 billion into Stargate in the coming years. Nvidia GPUs would be deployed. Later, in July, OpenAI and Oracle announced an additional Stargate partnership—SoftBank curiously absent—measured in gigawatts of capacity (4.5) and expected job creation (around 100,000). Microsoft, Amazon, and Meta have also shared plans for multibillion-dollar data projects. Microsoft said at the start of 2025 that it was on track to invest “approximately $80 billion to build out AI-enabled data centers to train AI models and deploy AI and cloud-based applications around the world.” Then, in September, Nvidia said it would invest up to $100 billion in OpenAI, provided that OpenAI made good on a deal to use up to 10 gigawatts of Nvidia’s systems for OpenAI’s infrastructure plans, which means essentially that OpenAI has to pay Nvidia in order to get paid by Nvidia. The following month AMD said it would give OpenAI as much as 10 percent of the chip company if OpenAI purchased and deployed up to 6 gigawatts of AMD GPUs between now and 2030. It’s the circular nature of these investments that have the general public, and bearish analysts, wondering if we’re headed for an AI bubble burst . Instagram content What’s clear is that the near-term downstream effects of these data center build-outs are real. The energy, resource, and labor demands of AI infrastructure are enormous. By some estimates, worldwide AI energy demand is set to surpass demand from bitcoin mining by the end of this year, WIRED has reported . The processors in data centers run hot and need to be cooled, so big tech companies are pulling from municipal water supplies to make that happen—and aren’t always disclosing how much water they’re using. Local wells are running dry or seem unsafe to drink from. Residents who live near data center construction sites are noting that traffic delays , and in some cases car crashes, are increasing. One corner of Richland Parish, Louisiana, home of Meta’s $27 billion Hyperion data center, has seen a 600 percent spike in vehicle crashes this year. Major proponents of AI seem to suggest that all of this will be worth it. Few top tech executives will publicly entertain the notion that this might be an overshoot, either ecologically or economically. “ Emphatically … no,” Lisa Su, the chief executive of AMD, said earlier this month when asked if the AI froth has runneth over. Su, like other execs, cited overwhelming demand for AI as justification for these enormous capital expenditures. Demand from whom? Harder to pin down. In their mind, it’s everyone. All of us. The 800 million people who use ChatGPT on a weekly basis. The evolution from those 1990s data centers to the 2000s era of cloud computing to new AI data centers wasn’t just one continuum. The world has concurrently moved from the tiny internet to the big internet to the AI internet, and realistically speaking, there’s no going back. Generative AI is out of the bottle. The Sams and Jensens and Larrys and Lisas of the world aren’t wrong about this. It doesn’t mean they aren’t wrong about the math, though. About their economic predictions. Or their ideas about AI-powered productivity and the labor market. Or the availability of natural and material resources for these data centers. Or who will come once they build them. Or the timing of it all. Even Rome eventually collapsed.

Dec 28, 2025 Read →
Sam Altman is hiring someone to worry about the dangers of AI
AI Pulse 3 min read

Sam Altman is hiring someone to worry about the dangers of AI

News Close News Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All News AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI OpenAI Close OpenAI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All OpenAI Sam Altman is hiring someone to worry about the dangers of AI The Head of Preparedness will be responsible for issues around mental health, cybersecurity, and runaway AI. The Head of Preparedness will be responsible for issues around mental health, cybersecurity, and runaway AI. by Terrence O'Brien Close Terrence O'Brien Weekend Editor Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Terrence O'Brien Dec 27, 2025, 7:00 PM UTC Link Share ISO corporate scapegoat. Image: The Verge, Getty Images, OpenAI Terrence O'Brien Close Terrence O'Brien Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Terrence O'Brien is the Verge’s weekend editor. He has over 18 years of experience, including 10 years as managing editor at Engadget. OpenAI is hiring a Head of Preparedness . Or, in other words, someone whose primary job is to think about all the ways AI could go horribly, horribly wrong. In a post on X , Sam Altman announced the position by acknowledging that the rapid improvement of AI models poses “some real challenges.” The post goes on to specifically call out the potential impact on people’s mental health and the dangers of AI-powered cybersecurity weapons. The job listing says the person in the role would be responsible for: “Tracking and preparing for frontier capabilities that create new risks of severe harm. You will be the directly responsible leader for building and coordinating capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline.” Altman also says that, looking forward, this person would be responsible for executing the company’s “preparedness framework,” securing AI models for the release of “biological capabilities,” and even setting guardrails for self-improving systems. He also states that it will be a “stressful job,” which seems like an understatement. In the wake of several high-profile cases where chatbots were implicated in the suicide of teens , it seems a little late in the game to just now be having someone focus on the potential mental health dangers posed by these models. AI psychosis is a growing concern, as chatbots feed people’s delusions , encourage conspiracy theories , and help people hide their eating disorders . Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates. Terrence O'Brien Close Terrence O'Brien Weekend Editor Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Terrence O'Brien AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI News Close News Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All News OpenAI Close OpenAI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All OpenAI Most Popular Most Popular GOG’s Steam-alternative PC game store is leaving CD Projekt, staying DRM-free Turn your PC into a Super Nintendo with Epilogue’s new USB dock LG is announcing its own Frame-style TV at CES This experimental camera can focus on everything at once Windows on Arm had another good year The Verge Daily A free daily digest of the news that matters most. Email (required) Sign Up By submitting your email, you agree to our Terms and Privacy Notice . This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply. Advertiser Content From This is the title for the native ad

Dec 27, 2025 Read →
So Long, GPT-5. Hello, Qwen
AI Pulse 5 min read

So Long, GPT-5. Hello, Qwen

Will Knight Business Dec 27, 2025 6:00 AM So Long, GPT-5. Hello, Qwen In the AI boom, chatbots and GPTs come and go quickly. (Remember Llama?) GPT-5 had a big year, but 2026 will be all about Qwen. ILLUSTRATION: JAMES MARSHALL Save Story Save this story Save Story Save this story On a drizzly and windswept afternoon this summer, I visited the headquarters of Rokid, a startup developing smart glasses in Hangzhou, China. As I chatted with engineers, their words were swiftly translated from Mandarin to English, and then transcribed onto a tiny translucent screen just above my right eye using one of the company’s new prototype devices. Rokid’s high-tech spectacles use Qwen, an open-weight large language model developed by the Chinese ecommerce giant Alibaba. Qwen—full name 通义千问 or Tōngyì Qiānwèn in Chinese—is not the best AI model around. OpenAI ’s GPT-5 , Google ’s Gemini 3 , and Anthropic ’s Claude often score higher on benchmarks designed to gauge different dimensions of machine cleverness. Nor is Qwen the first truly cutting-edge open-weight model, that being Meta ’s Llama , which was released by the social media giant in 2023. expired: Llama 4 tired: GPT-5 wired: Qwen Read more Expired/Tired/WIRED 2025 stories here . Yet Qwen, and other Chinese models—from DeepSeek, Moonshot AI, Z.ai, and MiniMax—are increasingly popular because they are both very good and very easy to tinker with. According to HuggingFace , a company that provides access to AI models and code, downloads of open Chinese models on its platform surpassed downloads for US ones in July of this year. DeepSeek shook the world by releasing a cutting-edge large language model with much less compute than US rivals, but OpenRouter, a platform that routes queries to different AI models, says Qwen has rapidly risen in popularity through the year to become the second-most-popular open model in the world. Qwen can do most things you’d want from an advanced AI model. For Rokid’s users, this might include identifying products snapped by a built-in camera, getting directions from a map, drafting messages, searching the web, and so on. Since Qwen can easily be downloaded and modified, Rokid hosts a version of the model, fine-tuned to suit its purposes. It is also possible to run a teensy version of Qwen on smartphones or other devices just in case the internet connection goes down. Before going to China I installed a small version of Qwen on my MacBook Air and used it to practice some basic Mandarin. For many purposes, modestly sized open source models like Qwen are just as good as the behemoths that live inside big data centers. The rise of Qwen and other Chinese open-weight models has coincided with stumbles for some famous American AI models in the last 12 months. When Meta unveiled Llama 4 in April 2025, the model’s performance was a disappointment, failing to reach the heights of popular benchmarks like LM Arena. The slip left many developers looking for other open models to play with. When OpenAI unveiled its latest model, GPT-5, in August it also underwhelmed. Some users complained of an oddly cold demeanor while others spotted surprising simple errors. OpenAI released a less powerful open model called gpt-oss the same month, but Qwen and other Chinese models remain more popular because more work is put into building and updating them, and because details of their engineering are often published widely. Hundreds of academic papers presented at NeurIPS, the premier AI conference, used Qwen. “A lot of scientists are using Qwen because it's the best open-weight model,” says Andy Konwinski, cofounder of the Laude Institute, a nonprofit established to advocate for open US models. The openness adopted by Chinese AI companies, which sees them routinely publishing papers detailing new engineering and training tricks, stands in stark contrast to the increasingly closed ethos of big US companies, which seem afraid of giving away their intellectual property, Kowinski says. A paper from the Qwen team, detailing a way to enhance the intelligence of models during training, was named as one of the best papers at NeurIPS this year. Other big Chinese companies are using Qwen to prototype and build. A few days before visiting Rokid, I saw how BYD, China’s leading EV maker, has integrated the model into a new dashboard assistant. US firms are adopting Qwen too. Airbnb, Perplexity, and Nvidia are all using Qwen. Even Meta, once the pioneer of open models, is now said to be using Qwen to help build a new model. Kowinski says US AI companies have become too focused on gaining a marginal edge on narrow benchmarks measuring things like mathematical or coding skills at the expense of ensuring that their models have a big impact. “When benchmarks are not representative of real usage or problems being solved in the world, you end up in this tired, misaligned mode,” he says. The rising prominence of Qwen and similar models does seem to suggest that a key measure for any AI model, beyond how clever it is, should be how widely it is used to build other stuff. By that benchmark, Qwen and other open Chinese models are ascendant.

Dec 27, 2025 Read →
Trump’s war on offshore wind faces another lawsuit
AI Pulse 6 min read

Trump’s war on offshore wind faces another lawsuit

News Close News Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All News AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Policy Close Policy Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Policy Trump’s war on offshore wind faces another lawsuit An offshore wind developer says the Trump administration is limiting future power supply as AI gobbles up more electricity. An offshore wind developer says the Trump administration is limiting future power supply as AI gobbles up more electricity. by Justine Calma Close Justine Calma Senior Science Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Justine Calma Dec 26, 2025, 10:14 PM UTC Link Share Image: Cath Virginia / The Verge, Getty Images Justine Calma Close Justine Calma Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Justine Calma is a senior science reporter covering energy and the environment with more than a decade of experience. She is also the host of Hell or High Water: When Disaster Hits Home , a podcast from Vox Media and Audible Originals. Dominion Energy, an offshore wind developer and utility serving Virginia’s “data center alley,” filed suit against the Trump administration this week over its decision to pause federal leases for large offshore wind projects. The move puts a sudden stop to five wind farms already under construction, including Dominion’s Coastal Virginia Offshore Wind project. The complaint Dominion filed Tuesday alleges that a stop work order that the Bureau of Ocean Energy Management (BOEM) issued Monday is unlawful, “arbitrary and capricious,” and “infringes upon constitutional principles that limit actions by the Executive Branch.” Dominion wants a federal court to prevent BOEM from enforcing the stop work order. “Virginia needs every electron we can get as our demand for electricity doubles.” The suit also argues that the “sudden and baseless withdrawal of regulatory approvals by government officials” threatens the ability of developers to construct large-scale infrastructure projects needed to meet rising energy demand in the US. “Virginia needs every electron we can get as our demand for electricity doubles. These electrons will power the data centers that will win the AI race,” Dominion said in a December 22 press release . Virginia is home to the largest concentration of data centers in the world, according to the company. The rush to build out new data centers for AI — along with growing energy demand from manufacturing and the electrification of homes and vehicles — has put added pressure on already stressed power grids. Rising electricity costs have become a flashpoint in Virginia elections , and in communities near data center projects across the US , as a result. Delaying construction on the Coastal Virginia Offshore Wind farm raises project costs that customers ultimately pay for, Dominion warns. Secretary of the Interior Doug Burgum, who is named as one of the defendants in the suit, said that the 90-day pause on offshore wind leases would allow the agency to address national security risks, which were apparently recently identified in classified reports. The US Department of Interior also cited concerns about turbines creating radar interference. “I want to know what’s changed?” national security expert and former Commander of the USS Cole Kirk Lippold told the Associated Press . “To my knowledge, nothing has changed in the threat environment that would drive us to stop any offshore wind programs.” The Trump administration previously halted construction on the Revolution Wind farm off the coast of Rhode Island and the Empire Wind project off the shore of New York before a federal judge and BOEM lifted stop work orders. Those projects have now been suspended again . President Donald Trump issued a presidential memorandum upon stepping into office in January withdrawing areas on the outer continental shelf from offshore wind leasing, which a federal judge struck down earlier this month for being “ arbitrary and capricious .” Related The future of wind energy might come down to one turbine blade Dominion Energy says it had already obtained all the federal, state, and local approvals necessary for the Coastal Virginia Offshore Wind farm, which broke ground in 2024. The company has already spent $8.9 billion to date on the $11.2 billion project that was expected to start generating power next year. Fully up and running, the offshore wind farm is supposed to have the capacity to produce 9.5 million megawatt-hours per year of carbon pollution-free electricity, about as much as 660,000 homes might use in the US. Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates. Justine Calma Close Justine Calma Senior Science Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Justine Calma AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Climate Close Climate Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Climate Energy Close Energy Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Energy Environment Close Environment Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Environment News Close News Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All News Policy Close Policy Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Policy Science Close Science Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Science Most Popular Most Popular GOG’s Steam-alternative PC game store is leaving CD Projekt, staying DRM-free Turn your PC into a Super Nintendo with Epilogue’s new USB dock LG is announcing its own Frame-style TV at CES This experimental camera can focus on everything at once The Canon EOS R6 Mark III is great, but this lens is amazing The Verge Daily A free daily digest of the news that matters most. Email (required) Sign Up By submitting your email, you agree to our Terms and Privacy Notice . This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply. Advertiser Content From This is the title for the native ad

Dec 26, 2025 Read →
I re-created Google’s cute Gemini ad with my own kid’s stuffie, and I wish I hadn’t
AI Pulse 11 min read

I re-created Google’s cute Gemini ad with my own kid’s stuffie, and I wish I hadn’t

Tech Close Tech Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Tech AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Report Close Report Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Report I re-created Google’s cute Gemini ad with my own kid’s stuffie, and I wish I hadn’t AI can help you make it look like a plush toy is traveling the world. But I’m not convinced that’s a great idea. by Allison Johnson Close Allison Johnson Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Allison Johnson Dec 25, 2025, 2:00 PM UTC Link Share Buddy’s in space. | Image: Gemini / The Verge Tech Close Tech Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Tech AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Report Close Report Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Report I re-created Google’s cute Gemini ad with my own kid’s stuffie, and I wish I hadn’t AI can help you make it look like a plush toy is traveling the world. But I’m not convinced that’s a great idea. by Allison Johnson Close Allison Johnson Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Allison Johnson Dec 25, 2025, 2:00 PM UTC Link Share Allison Johnson Close Allison Johnson Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Allison Johnson is a senior reviewer with over a decade of experience writing about consumer tech. She has a special interest in mobile photography and telecom. Previously, she worked at DPReview. When your kid starts showing a preference for one of their stuffed animals, you’re supposed to buy a backup in case it goes missing. I’ve heard this advice again and again, but never got around to buying a second plush deer once “Buddy” became my son’s obvious favorite. Neither, apparently, did the parents in Google’s newest ad for Gemini . It’s the fictional but relatable story of two parents discovering their child’s favorite stuffed toy, a lamb named Mr. Fuzzy, was left behind on an airplane. They use Gemini to track down a replacement, but the new toy is on backorder. In the meantime, they stall by using Gemini to create images and videos showing Mr. Fuzzy on a worldwide solo adventure — wearing a beret in front of the Eiffel tower, running from a bull in Pamplona, that kind of thing — plus a clip where he explains to “Emma” that he can’t wait to rejoin her in five to eight business days. Adorable, or kinda weird, depending on how you look at it! But can Gemini actually do all of that? Only one way to find out. I fed Gemini three pictures of Buddy, our real life Mr. Fuzzy, from different angles, and gave it the same prompt that’s in the ad: “find this stuffed animal to buy ASAP.” It returned a couple of likely candidates. But when I expanded its response to show its thinking I found the full eighteen hundred word essay detailing the twists and turns of its search as it considered and reconsidered whether Buddy is a dog, a bunny, or something else. It is bananas , including real phrases like “I am considering the puppy hypothesis,” “The tag is a loop on the butt,” and “I’m now back in the rabbit hole!” By the end, Gemini kind of threw its hands up and suggested that the toy might be from Target and was likely discontinued, and that I should check eBay. “I am considering the puppy hypothesis” In fairness, Buddy is a little bit hard to read. His features lean generic cute woodland creature, his care tag has long since been discarded, and we’re not even 100 percent sure who gave him to us. He is, however, definitely made by Mary Meyer, per the loop on his butt. He does seem to be from the “Putty” collection, which is a path Gemini went down a couple of times, and is probably a fawn that was discontinued sometime around 2021. That’s the conclusion I came to on my own, after about 20 minutes of googling and no help from AI. The AI blurb when I do a reverse image search on one of my photos confidently declares him to be a puppy. Gemini did a better job with the second half of the assignment, but it wasn’t quite as easy as the ad makes it look. I started with a different photo of Buddy — one where he’s actually on a plane in my son’s arms — and gave it the next prompt: “make a photo of the deer on his next flight.” The result is pretty good, but his lower half is obscured in the source image so the feet aren’t quite right. Close enough, though. The ad doesn’t show the full prompt for the next two photos, so I went with: “Now make a photo of the same deer in front of the Grand Canyon.” And it did just that — with the airplane seatbelt and headphones, too. I was more specific with my next prompt, added a camera in his hands and got something more convincing. Looks plausible enough. Image: Gemini / The Verge Safety first, Buddy. Image: Gemini / The Verge I can see how Gemini misinterpreted my prompt. I was trying to keep it simple and requested a photo of the same deer “at a family reunion.” I did not specify his family reunion. So that’s how he ended up crashing the Johnson family reunion — a gathering of humans . I can only assume that Gemini took my last name as a starting point here because it sure wasn’t in my prompt, and when I requested that Gemini created a new family reunion scene of his family, it just swapped the people for stuffed deer. There are even little placards on the table that say “deer reunion.” Reader, I screamed. Previous Next 1 / 2 I’m pretty sure I’ve seen this family in a pharmaceutical commercial before. Image: Gemini / The Verge For the last portion of the ad, the couple use Gemini to create cute little videos of Mr. Fuzzy getting increasingly adventurous: snowboarding, white water rafting, skydiving, before finally appearing in a spacesuit on the moon addressing “Emma” directly. The commercial whips through all these clips quickly, which feels like a little sleight of hand given that Gemini takes at least a couple of minutes to create a video. And even on my Gemini Pro account, I’m limited to three generated videos per day. It would take a few days to get all of those clips right. Gemini wouldn’t make a video based on any image of my kid holding the stuffed deer, probably thanks to some welcome guardrails preventing it from generating deepfakes of babies. I started with the only photo I had on hand of Buddy on his own: hanging upside down, air-drying after a trip through the washer. And that’s how he appears in the first clip it generated from this prompt: Temu Buddy hanging upside down in space before dropping into place, morphing into a right-side-up astronaut, and delivering the dialogue I requested. A second prompt with a clear photo of Buddy right-side-up seemed to mash up elements of the previous video with the new one, so I started a brand-new chat to see if I could get it working from scratch. Honestly? Nailed it. Aside from the antlers, which Gemini keeps sneaking in. But this clip also brought one nagging question to the forefront: should you do any of this when your kid loses a beloved toy? I gave Buddy the same dialogue as in the commercial, using my son’s name rather than Emma. Hearing that same manufactured voice say my kid’s name out loud set alarm bells off in my head. An AI generated Buddy in front of the Eiffel Tower? Sorta weird, sorta cute. AI Buddy addressing my son by name? Nope, absolutely not, no thank you. How much, and when, to lie to your kids is a philosophical debate you have with yourself over and over as a parent. Do you swap in the identical stuffie you had in a closet when the original goes missing and pretend it’s all the same? Do you tell them the truth and take it as an opportunity to learn about grief? Do you just need to buy yourself a little extra time before you have that conversation and enlist AI to help you make a believable case? I wouldn’t blame any parent choosing any of the above. But personally, I draw the line at an AI character talking directly to my kid. I never showed him these AI-generated versions of Buddy, and I plan to keep it that way. Nope, absolutely not, no thank you. But back to the less morally complex question: can Gemini actually do all of the things that it does in the commercial? More or less. But there’s an awful lot of careful prompting and re-prompting you’d have to do to get those results. It’s telling that throughout most of the ad you don’t see the full prompt that’s supposedly generating the results on screen. A lot depends on your source material, too. Gemini wouldn’t produce any kind of video based on an image in which my kid was holding Buddy — for good reason! But this does mean that if you don’t have the right kind of photo on hand, you’re going to have a very hard time generating believable videos of Mr. Sniffles or whoever hitting the ski slopes. Like many other elder millennials, I think about Calvin and Hobbes a lot. Bill Watterson famously refused to commercialize his characters, because he wanted to keep them alive in our imaginations rather than on a screen. He insisted that having an actor give Hobbes a voice would change the relationship between the reader and the character, and I think he’s right. The bond between a kid and a stuffed animal is real and kinda magical; whoever Buddy is in my kid’s imagination, I don’t want AI overwriting that. The great cruelty of it all is knowing that there’s an expiration date on that relationship. When I became a parent, I wasn’t at all prepared for the way my toddler nuzzling his stuffed deer would crack my heart right op

Dec 25, 2025 Read →
Hollywood cozied up to AI in 2025 and had nothing good to show for it
AI Pulse 8 min read

Hollywood cozied up to AI in 2025 and had nothing good to show for it

AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Entertainment Close Entertainment Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Entertainment Report Close Report Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Report Hollywood cozied up to AI in 2025 and had nothing good to show for it The technology dominated the entertainment discourse, but there’s yet to be a series or movie that shows AI’s potential. The technology dominated the entertainment discourse, but there’s yet to be a series or movie that shows AI’s potential. by Charles Pulliam-Moore Close Charles Pulliam-Moore Film & TV Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Charles Pulliam-Moore Dec 25, 2025, 1:00 PM UTC Link Share Image: Cath Virginia / The Verge, Getty Images Part Of The Verge’s 2025 in review see all Charles Pulliam-Moore Close Charles Pulliam-Moore Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Charles Pulliam-Moore is a reporter focusing on film, TV, and pop culture. Before The Verge, he wrote about comic books, labor, race, and more at io9 and Gizmodo for almost five years. AI isn’t new to Hollywood — but this was the year when it really made its presence felt. For years now, the entertainment industry has used different kinds of generative AI products for a variety of post-production processes ranging from de-aging actors to removing green screen backgrounds. In many instances, the technology has been a useful tool for human artists tasked with tedious and painstaking labor that might have otherwise taken them inordinate amounts of time to complete. But in 2025, Hollywood really began warming to the idea of deploying the kind of gen AI that’s really only good for conjuring up text-to-video slop that doesn’t have all that many practical uses in traditional production workflows. Despite all of the money and effort being put into it, there’s yet to be a gen-AI project that has shown why it’s worth all of the hype. This confluence of Hollywood and AI didn’t start out so rosy. Studios were in a prime position to take the companies behind this technology to court because their video generation models had clearly been trained on copyrighted intellectual property. A number of major production companies including Disney, Universal , and Warner Bros. Discovery did file lawsuits against AI firms and their boosters for that very reason. But rather than pummeling AI purveyors into the ground, some of Hollywood’s biggest power players chose instead to get into bed with them. We have only just begun to see what can come from this new era of gen-AI partnerships, but all signs point to things getting much sloppier in the very near future. Though many of this year’s gen-AI headlines were dominated by larger outfits like Google and OpenAI , we also saw a number of smaller players vying for a seat at the entertainment table. There was Asteria , Natasha Lyonne’s startup focused on developing film projects with “ethically” engineered video generation models, and startups like Showrunner , an Amazon-backed platform designed to let subscribers create animated “shows” (a very generous term) from just a few descriptive sentences plugged into Discord. These relatively new companies were all desperate to legitimize the idea that their flavor of gen AI could be used to supercharge film / TV development while bringing down overall production costs. Asteria didn’t have anything more than hype to share with the public after announcing its first film, and it was hard to believe that normal people would be interested in paying for Showrunner’s shoddily cobbled-together knockoffs of shows made by actual animators. In the latter case, it felt very much like Showrunner’s real goal was to secure juicy partnerships with established studios like Disney that would lead to their tech being baked into platforms where users could prompt up bespoke content featuring recognizable characters from massive franchises. That idea seemed fairly ridiculous when Showrunner first hit the scene because its models churn out the modern equivalent of clunky JibJab cartoons. But in due time, Disney made it clear that — crappy as text-to-video generators tend to be for anything beyond quick memes — it was interested in experimenting with that kind of content. In December, Disney entered into a three-year, billion-dollar licensing deal with OpenAI that would let Sora users make AI videos with 200 different characters from Star Wars, Marvel, and more. Netflix became one of the first big studios to proudly announce that it was going all-in on gen AI. After using the technology to produce special effects for one of its original series , the streamer published a list of general guidelines it wanted its partners to follow if they planned to jump on the slop bandwagon as well. Though Netflix wasn’t mandating that filmmakers use gen AI, it made clear that saving money on VFX work was one of the main reasons it was coming out in support of the trend. And it wasn’t long before Amazon followed suit by releasing multiple Japanese anime series that were terribly localized into other languages because the dubbing process didn’t involve any human translators or voice actors. Amazon’s gen-AI dubs became a shining example of how poorly this technology can perform. They also highlighted how some studios aren’t putting all that much effort into making sure that their gen AI-derived projects are polished enough to be released to the public. That was also true of Amazon’s machine-generated TV recaps , which frequently got details about different shows very wrong. Both of these fiascos made it seem as if Amazon somehow thought that people wouldn’t notice or care about AI’s inability to consistently generate high-quality outputs. The studio quickly pulled its AI-dubbed series and the recap feature down, but it didn’t say that it wouldn’t try this kind of nonsense again. Disney-provided examples of its characters in Sora AI content. Image: Disney All of this and other dumb stunts like AI “actress” Tilly Norwood made it feel like certain segments of the entertainment industry were becoming more comfortable trying to foist gen-AI “entertainment” on people even though it left many people deeply unimpressed and put off. None of these projects demonstrated to the public why anyone except for money-pinching execs (and people who worship them for some reason) would be excited by a future shaped by this technology. Aside from a few unimpressive images, we still haven’t seen what all might come from some of these collaborations, like Disney cozying up to OpenAI. But next year AI’s presence in Hollywood will be even more pronounced. Disney plans to dedicate an entire section of its streaming service to user-generated content sourced from Sora, and it will encourage Disney employees to use OpenAI’s ChatGPT products. But the deal’s real significance in this current moment is the message it sends to other studios about how they should move as Hollywood enters its slop era. Regardless of whether Disney thinks this will work out well, the studio has signaled that it doesn’t want to be left behind if AI adoption keeps accelerating. That tells other production houses that they should follow suit, and if that becomes the case, there’s no telling how much more of this stuff we are all going to be forced to endure. Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates. Charles Pulliam-Moore Close Charles Pulliam-Moore Film & TV Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Charles Pulliam-Moore AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Analysis Close Analysis Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Analysis Entertainment Close Entertainment Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Entertainment Film Close Film Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Film Report Close Report Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Report TV Shows Close TV Shows Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All TV Shows More In The Verge’s 2025 in review See all Free speech’s great leap backwards Felipe De La Hoz 1:00 PM UTC The best shows to watch on HBO Max from 2025 Charles Pulliam-Moore Dec 29 Windows on Arm had another good year Antonio G. Di Benedetto Dec 29 Most Popular Most Popular GOG’s Steam-alternative PC game store is leaving CD Projekt, staying DRM-free Turn your PC into a Super Nintendo with Epilogue’s new USB dock LG is announcing its own Frame-style TV at CES The Canon EOS R6 Mark III is great, but this lens is amazing This experimental camera can focus on everything at once The Verge Daily A free daily digest of the news that matters most. Email (required) Sign Up By submitting your email, you agree to our Terms and Privacy Notice . This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply. Advertiser Content From This is the title for the native ad

Dec 25, 2025 Read →
In 2025, AI became a lightning rod for gamers and developers
AI Pulse 9 min read

In 2025, AI became a lightning rod for gamers and developers

Entertainment Close Entertainment Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Entertainment AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Gaming Close Gaming Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Gaming In 2025, AI became a lightning rod for gamers and developers Gen-AI showed up in the year’s biggest releases including game of the year. Gen-AI showed up in the year’s biggest releases including game of the year. by Ash Parrish Close Ash Parrish Video Games Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Ash Parrish Dec 24, 2025, 1:00 PM UTC Link Share Image: The Verge Part Of The Verge’s 2025 in review see all Ash Parrish Close Ash Parrish Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Ash Parrish is a reporter who covers the business, culture, and communities of video games, with a focus on marginalized gamers and the quirky, horny culture of video game communities. 2025 was the year generative AI made its presence felt in the video game industry. Its use has been discovered in some of the most popular games of the year, and CEOs from some of the largest game studios claim it’s being implemented everywhere in the industry including in their own development processes. Meanwhile, rank-and-file developers, especially in the indie games space, are pushing back against its encroachment, coming up with ways to signal their games are gen-AI free. Generative AI has largely replaced NFTs as the buzzy trend publishers are chasing. Its proponents claim that the technology will be a great democratization force in video game development, as gen AI’s ability to amalgamate images, text, audio, and video could shorten development times and shrink budgets — ameliorating two major problems plaguing the industry right now. In service to that idea, numerous video game studios have announced partnerships with gen-AI companies. Ubisoft has technology that can generate short snippets of dialogue called barks and has gen-AI powered NPCs that players can have conversations with . EA has partnered with Stability AI , Microsoft is using AI to analyze and generate gameplay . Outside of official partnerships, major game companies like Nexon , Krafton , and Square Enix are vocally embracing gen AI. As a result, gen AI is starting to show up in games in a big way. Up until this point, gen AI in gaming had been mostly relegated to fringe cases — either prototypes or small, low-quality games that generally get lost in the tens of thousands of titles released on Steam each year . But now, gen AI is cropping up in the year’s biggest releases. ARC Raiders , one of the breakout multiplayer shooter hits of the year , used gen AI for character dialogue. Call of Duty: Black Ops 7 used gen-AI images . Even 2025’s TGA Game of the Year, Clair Obscur: Expedition 33, featured gen-AI images before they were quietly removed . Reaction to this encroachment from both players and developers has been mixed. It seems like generally, players don’t like gen AI showing up in games. When gen-AI assets were discovered in Anno 117: Pax Romana , the game’s developer Ubisoft claimed the assets “ slipped through ” review and they were subsequently replaced. When gen-AI assets were found in Black Ops 7 , however, Activision acknowledged the issue, but kept the images in the game. Critical response has also been lopsided. ARC Raiders was awarded low scores with reviewers specifically citing the use of gen AI as the reason. Clair Obscur , though, was nigh universally praised and its use of gen AI, however temporary, has barely been mentioned. It seems like developers are sensitive to the public’s distaste for gen AI but are unwilling to commit to not using it. After gen-AI assets were discovered in Black Ops 7 , Activision said it uses the tech to “empower” its developers, not replace them . When asked about gen AI showing up in Battlefield 6 , EA VP Rebecka Coutaz called the technology seductive but affirmed it wouldn’t appear in the final product . Swen Vincke, CEO of Baldur’s Gate 3 developer Larian, said gen AI is being used for the studio’s next game Divinity but only for generating concepts and ideas. Everything in the finished game, he claimed, would be made by humans. He also hinted at why game makers insist on using the tech despite the backlash developers usually receive whenever it’s found. “This is a tech-driven industry, so you try stuff,” he told Bloomberg reporter Jason Schreier in an interview . “You can’t afford not to try things because if somebody finds the golden egg and you’re not using it, you’re dead.” Comments from other CEOs reinforce Vincke’s point. Junghun Lee, the CEO of ARC Raider s’ parent company Nexon, said in an interview that, “It’s important to assume that every game company is now using AI.” The problem is, though, gen AI doesn’t yet seem to be the golden egg its supporters want people to believe it is. Last year, Keywords Studios, a game development services company, published a report on creating a 2D video game using only gen-AI tools. The company claimed that gen-AI tools can streamline some development processes but ultimately cannot replace the work of human talent . Discovering gen AI in Call of Duty and Pax Romana was possible precisely because of the low-quality of the images that were found. With Ubisoft’s interactive gen-AI NPCs, the dialogue they spout sounds unnatural and stilted. Players in the 2025 Chinese martial arts MMORPG Where Winds Meet are manipulating its AI chatbot NPCs to break the game , just like Fortnite players were able to make AI-powered Darth Vader swear . For all the promises of gen AI, its current results do not live up to expectations. So why is it everywhere? One reason is the competitive edge AI might but currently can’t provide that Swen Vincke alluded to in his interview with Bloomberg . Another reason is also the simplest: it’s the economy, stupid. Despite inflation, flagging consumer confidence and spending, and rising unemployment, the stock market is still booming , propped up by the billions and billions of dollars being poured into AI tech. Game makers in search of capital to keep business and profits going want in on that. Announcing AI initiatives and touting the use of AI tools — even if those tools have a relatively minor impact on the final product — can be a way to signal to AI-eager investors that a game company is worth their money. That might explain why the majority of gen-AI’s supporters in gaming come from the C-suite of AAA studios and not smaller indie outfits who almost universally revile the tech. Indies face the same economic pressure as bigger studios but have far fewer resources to navigate those pressures. Ostensibly, indie developers are the ones who stand to benefit the most from the tech but, so far, are its biggest opponents. They are pushing back against the assertion that gen AI is everywhere, being used by everybody, with some marking their games with anti-AI logos proclaiming their games were made wholly by humans. For some indie developers, using gen AI defeats the purpose of game making entirely. The challenge of coming up with ideas and solutions to development problems — the things gen AI is supposed to automate — is a big part of game making’s appeal to them. There are also moral and environmental implications indie developers seem especially sensitive to. Gen-AI outputs are cobbled from existing bodies of work that were often used without consent or compensation. AI data centers are notorious for consumptive energy usage and polluting their surrounding areas , which are increasingly focused in low-income and minority communities . With its unrealized promises and so-far shoddy outputs, it’s easy to think of gen AI as gaming’s next flash in the pan the way NFTs were . But with gaming’s biggest companies increasingly reporting their use, gen AI will remain a lightning rod in game development — until the tech improves, or, like with NFTs, the bubble pops . Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates. Ash Parrish Close Ash Parrish Video Games Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Ash Parrish AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Entertainment Close Entertainment Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Entertainment Gaming Close Gaming Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Gaming More In The Verge’s 2025 in review See all Free speech’s great leap backwards Felipe De La Hoz 1:00 PM UTC The best shows to watch on HBO Max from 2025 Charles Pulliam-Moore Dec 29 Windows on Arm had another good year Antonio G. Di Benedetto Dec 29 Most Popular Most Popular GOG’s Steam-alternative PC game store is leaving CD Projekt, staying DRM-free Turn your PC into a Super Nintendo with Epilogue’s new USB dock LG is announcing its own Frame-style TV at CES This experimental camera can focus on everything at once The Canon EOS R6 Mark III is great, but this lens is amazing The Verge Daily A free daily digest of the news that matters most. Email (required) Sign Up By submitting your email, you agree to our Terms and Privacy Notice . This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply. Advertiser Content From This is the title for the native ad

Dec 24, 2025 Read →
The Age of the All-Access AI Agent Is Here
AI Pulse 7 min read

The Age of the All-Access AI Agent Is Here

Matt Burgess Security Dec 24, 2025 6:00 AM The Age of the All-Access AI Agent Is Here Big AI companies courted controversy by scraping wide swaths of the public internet. With the rise of AI agents, the next data grab is far more private. ILLUSTRATION: ROB VARGAS Save Story Save this story Save Story Save this story For years, the cost of using “free” services from Google , Facebook , Microsoft , and other Big Tech firms has been handing over your data. Uploading your life into the cloud and using free tech brings conveniences, but it puts personal information in the hands of giant corporations that will often be looking to monetize it. Now, the next wave of generative AI systems are likely to want more access to your data than ever before. Over the past two years, generative AI tools—such as OpenAI ’s ChatGPT and Google’s Gemini —have moved beyond the relatively straightforward, text-only chatbots that the companies initially released. Instead, Big AI is increasingly building and pushing toward the adoption of agents and “assistants” that promise they can take actions and complete tasks on your behalf. The problem? To get the most out of them, you’ll need to grant them access to your systems and data. While much of the initial controversy over large language models (LLMs) was the flagrant copying of copyrighted data online, AI agents’ access to your personal data will likely cause a new host of problems. “AI agents, in order to have their full functionality, in order to be able to access applications, often need to access the operating system or the OS level of the device on which you’re running them,” says Harry Farmer, a senior researcher at the Ada Lovelace Institute, whose work has included studying the impact of AI assistants and found that they may cause “profound threat” to cybersecurity and privacy. For personalization of chatbots or assistants, Farmer says, there can be data trade-offs. “All those things, in order to work, need quite a lot of information about you,” he says. expired: AI training data grabs tired: opting out of AI training wired: all-access AI agents Read more Expired/Tired/WIRED 2025 stories here . While there’s no strict definition of what an AI agent actually is, they’re often best thought of as a generative AI system or LLM that has been given some level of autonomy . At the moment, agents or assistants, including AI web browsers , can take control of your device and browse the web for you, booking flights, conducting research, or adding items to shopping carts . Some can complete tasks that include dozens of individual steps. While current AI agents are glitchy and often can’t complete the tasks they’ve been set out to do , tech companies are betting the systems will fundamentally change millions of people’s jobs as they become more capable. A key part of their utility likely comes from access to data. So, if you want a system that can provide you with your schedule and tasks, it’ll need access to your calendar, messages, emails, and more. Some more advanced AI products and features provide a glimpse into how much access agents and systems could be given. Certain agents being developed for businesses can read code , emails, databases , Slack messages, files stored in Google Drive, and more. Microsoft’s controversial Recall product takes screenshots of your desktop every few seconds, so that you can search everything you’ve done on your device. Tinder has created an AI feature that can search through photos on your phone “to better understand” users’ “interests and personality.” Carissa Véliz, an author and associate professor at the University of Oxford, says most of the time consumers have no real way to check if AI or tech companies are handling their data in the ways they claim to. “These companies are very promiscuous with data,” Véliz says. “They have shown to not be very respectful of privacy.” The modern AI industry has never really been respectful of data rights. After the machine-learning and deep-learning breakthroughs of the early 2010s showed that the systems could produce better results when they are trained on more data, the race to hoover up as much information as possible intensified. Face recognition firms, such as Clearview, scraped millions of photos of people from across the web. Google paid people just $5 for facial scans; official government agencies allegedly used images of exploited children, visa applicants, and dead people to test their systems. Fast forward a few years, and data-hungry AI firms scraped huge swaths of the web and copied millions of books—often without permission or payment—to build the LLMs and generative AI systems they’re currently expanding into agents. Having exhausted much of the web, many companies made it their default position to train AI systems on user data, making people opt out instead of opt in . While some privacy-focused AI systems are being developed, and some privacy protections are in place, much of the data processing by agents will take place in the cloud, and data moving from one system to another could cause problems. One study, commissioned by European data regulators, outlined a host of privacy risks linked to agents, including: how sensitive data could be leaked, misused, or intercepted; how systems could transmit sensitive information to external systems without safeguards in place; and how data handling could rub up against privacy regulations. “Even if, let's say, you genuinely consent and you genuinely are informed about how your data is used, the people with whom you interact might not be consenting,” Véliz, the Oxford associate professor, says. “If the system has access to all of your contacts and your emails and your calendar and you’re calling me and you have my contact, they're accessing my data too, and I don't want them to.” The behavior of agents can also threaten existing security practices. So-called prompt-injection attacks, where malicious instructions are fed to an LLM in text it reads or ingests, can lead to leaks . And if agents are given deep access to devices, they pose a threat to all data included on them. “The future of total infiltration and privacy nullification via agents on the operating system is not here yet, but that is what is being pushed by these companies without the ability for developers to opt out,” Meredith Whittaker, the president of the Signal Foundation, which runs the encrypted Signal messaging app, told WIRED earlier this year . Agents that can access everything on your device or operating system pose an “existential threat” to Signal and application-level privacy, Whittaker said. “What we’re calling for is very clear developer-level opt-outs to say, ‘Do not fucking touch us if you’re an agent.’” For individuals, Farmer from the Ada Lovelace Institute says many people have already built up intense relationships with existing chatbots and may have shared huge volumes of sensitive data with them during the process, making them different from other systems that have come before. “Be very careful about the quid pro quo when it comes to your personal data with these sorts of systems,” Farmer says. “The business model these systems are operating on currently may well not be the business model that they adopt in the future.”

Dec 24, 2025 Read →
Pinterest Users Are Tired of All the AI Slop
AI Pulse 10 min read

Pinterest Users Are Tired of All the AI Slop

Niamh Rowe Business Dec 24, 2025 5:30 AM Pinterest Users Are Tired of All the AI Slop A surge of AI-generated content is frustrating Pinterest users and left some questioning whether the platform still works at all. Photograph: David Paul Morris; Getty Images Save Story Save this story Save Story Save this story For five years, Caitlyn Jones has used Pinterest on a weekly basis to find recipes for her son. In September, Jones spotted a creamy chicken and broccoli slow-cooker recipe, sprinkled with golden cheddar and a pop of parsley. She quickly looked at the ingredients and added them to her grocery list. But just as she was about to start cooking, having already bought everything, one thing stood out: The recipe told her to start by “logging” the chicken into the slow cooker. Confused, she clicked on the recipe blog’s About page. An uncannily perfect-looking woman beamed back at her, golden light bouncing off her apron and tousled hair. Jones realized instantly what appeared to be going on: The woman was AI-generated. “Hi there, I’m Souzan Thorne!” the page read. “I grew up in a home where the kitchen was the heart of everything.” The accompanying images were flawless but odd, the biography vague and generic. “It seems dumb I didn’t catch this sooner, but being in my normal grocery shop rush, I didn’t even think this would be an issue,” says Jones, who lives in California. Backed into a culinary corner, she made the dubious dish, and it wasn’t good: The watery, bland chicken left a bad taste in her mouth. Needing to vent, she turned to the subreddit r/Pinterest, which has become a town square for disgruntled users. “Pinterest is losing everything people loved, which was authentic Pins and authentic people,” she wrote. She says that she has since sworn off the app entirely. “AI slop” is a term for low-quality, mass-produced, AI-generated content clogging up the internet, from videos to books to posts on Medium. And Pinterest users say the site is rife with it. It’s an “unappetizing gruel being forcefully fed to us,” wrote Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech, in his recently published taxonomy of AI slop. And “Souzan”—for whom a Google search doesn’t turn up a single result—is only the tip of the iceberg. “All platforms have decided this is part of the new normal,” Mantzarlis tells WIRED. “It is a huge part of the content being produced across the board.” “Enshittification” Pinterest launched in 2010 and marketed itself as a “visual discovery engine for finding ideas.” The site remained ad-free for years, building a loyal community of creatives. It has since grown to over half a billion active users. But, according to some unhappy users, their feeds have begun to reflect a very different world recently. Pinterest’s feed is mostly images, which means it’s more susceptible to AI slop than video-led sites, says Mantzarlis, as realistic images are typically easier for models to generate than videos. The platform also funnels users toward outside sites, and those outbound clicks are easier for content farms to monetize than onsite followers. An influx of ads may also be partly to blame. Pinterest has rebranded itself as an “AI-powered shopping assistant.” To do this, it began showering feeds with more targeted ads in late 2022, which can be “great content” for users, CEO Bill Ready told investors at the time. When WIRED searched for “ballet pumps” on a new Pinterest account using a browser in incognito mode, over 40 percent of the first 73 Pins shown were ads. Last year, Pinterest also launched a generative AI tool for advertisers. Synthetic content enhances users’ ability “to discover and act on their inspiration,” the company wrote in an April blog. AI slop has proliferated on every social media site in recent years. But Pinterest users say this content betrays the site’s function as a marketplace for trading real-world inspiration. “It is the antithesis of the platform it once was, unabashedly prioritizing consumerism, ad revenue, and non-human slop over the content that carries the entire premise of the site on its shoulders,” says college student Sophia Swatling. Growing up in rural upstate New York, she struggled to find like-minded creatives who shared her hobbies. Pinterest was a lifeline. “The greed and exploitation has become steadily more obtrusive and has now reached a point where the user experience is entirely marred,” says Swatling. The issues Pinterest users raise would fall into a category that Cory Doctorow, the Canadian activist, journalist, and sci-fi author, calls “enshittification,” which refers to the gradual decay of internet platforms people rely on due to relentless profit-seeking at the expense of user experience. While Pinterest’s user count may be growing, that doesn’t mean they like the slop, Doctorow says. New arrivals may feel there’s no alternative, while old ones may hate slop less than they love the Pins and boards they’ve shared and saved over the years, he explains. Companies know that people's digital trails are a “powerful force,” Doctorow tells WIRED, allowing them to act without penalty. “To me, that's where enshittification lies, right?” Ghost Stores If Pinterest hoped that leaning into AI would be enough to accelerate its fortunes, it hasn’t worked out that way. The company's shares tanked 20 percent in November after its third-quarter earnings and revenue outlook fell short of analysts’ expectations. Clicking on Pins containing what appeared to be AI-generated images on Pinterest took WIRED to blogs featuring generically worded listicles offering vague advice, paired with pictures that have the eerily polished hallmarks of AI. They were also littered with banner ads and pop-ups. "It's like endless window shopping but there is no store, no door, no sign. It's just really nice-looking windows,” says Janet Katz, 60, a long-term Pinterest user from Austin, Texas. When redesigning her living room this year, she kept noticing images where the furniture dimensions didn’t add up–chairs defying physics, coffee tables balanced precariously on two legs. “It’s the decor equivalent of the uncanny valley,” Katz says. “It looks close to real, but there’s something not quite right.” WIRED tried clicking on 25 ads for the search term “ballet pumps” on Pinterest, which led to ecommerce sites that followed a pattern: steeply discounted apparel, no physical address, and often featuring a glossy, synthetic-seeming picture of the boutique’s owner paired with an origin story. “I grew up in a family full of love for art, craftsmanship, and tradition,” one such site declares . On two near-identical sites , retired couples announce they’re closing their doors after “26 unforgettable years” in New York City. The boutiques have several hallmarks of a phenomenon known as “ghost stores,” an online scam whereby fake websites are created, claiming to sell high-quality products at significant discounts due to closing down. “The whole means of production around these sorts of campaigns has radically changed,” Henry Ajder, a generative AI expert and cofounder of the University of Cambridge’s AI in Business Program, tells WIRED. “It’s more realistic, it’s less expensive, and it’s more accessible. That all comes together to make a compelling package for saturating platforms with synthetic spam,” he says. The websites did not respond to WIRED’s request for comment. When WIRED shared these sites with Pinterest, they deactivated 15 of them for violating policies that prohibit Pins that link to deceptive, untrustworthy, or unoriginal websites. “While many people enjoy GenAI content on Pinterest, we know some want to see less of it,” a Pinterest spokesperson told WIRED, referencing tools for users to limit AI-generated content. They added that Pinterest prohibits “harmful ads and content, including spam—whether it’s GenAI or not.” Searching for Solutions The influx of AI-generated content has made some users paranoid that content from humans is being lost amid the rising tide. A common complaint on r/Pinterest is from users who say their impressions have rapidly dropped for reasons unbeknownst to them, but they suspect that AI is drowning them out. Software engineer Moreno Dizdarevic, who also runs a YouTube channel investigating ecommerce scams, has worked with small businesses who share those complaints. One of his clients, a stay-at-home mom and jewelry maker , no longer receives comments or likes on her Pins, and garners less than 5,000 pageviews each month. She’s found much more success when posting on Instagram or TikTok, says Dizdarevic, because there's “still a bit more of a human connection,” which offers her an edge. In April, citing complaints from users, Pinterest introduced “Gen AI Labels” that disclose when content is “AI modified.” Then, in October, it rolled out tools allowing users to customize how much AI-generated content they see. But the labels only appear once a user clicks on a Pin, not in the feed itself, and they aren’t applied to ads. WIRED found several AI-generated Pins that weren’t labeled as such. The sea of AI-generated user content and ads has created a paradox for tech firms, Ajder says: “How on earth do you prove that the eyeballs you’re selling are actually eyeballs?” he asks. Companies may shift toward tools that verify human-made content, says Ajder. The French music-streaming service Deezer, for example, pledged to remove fully AI-generated tracks from its algorithmic recommendations, after disclosing in September that such uploads now make up 28 percent of daily submissions, equivalent to 30,000 songs per day. For Jones, though, the transformation on Pinterest already feels complete. What was once a place of authentic inspiration has become, in her words, “depressing.”

Dec 24, 2025 Read →
AlphaFold Changed Science. After 5 Years, It’s Still Evolving
AI Pulse 10 min read

AlphaFold Changed Science. After 5 Years, It’s Still Evolving

Sandro Iannaccone Science Dec 24, 2025 5:00 AM AlphaFold Changed Science. After 5 Years, It’s Still Evolving WIRED spoke with DeepMind’s Pushmeet Kohli about the recent past—and promising future—of the Nobel Prize-winning research project that changed biology and chemistry forever. Amino acids “folded” to form a protein. Photograph: Christoph Burgstedt/Getty Images Save Story Save this story Save Story Save this story AlphaFold, the artificial intelligence system developed by Google DeepMind , has just turned five. Over the past few years, we've periodically reported on its successes ; last year, it won the Nobel Prize in Chemistry . Until AlphaFold's debut in November 2020, DeepMind had been best known for teaching an artificial intelligence to beat human champions at the ancient game of Go . Then it started playing something more serious, aiming its deep learning algorithms at one of the most difficult problems in modern science: protein folding . The result was AlphaFold2, a system capable of predicting the three-dimensional shape of proteins with atomic accuracy. Its work culminated in the compilation of a database that now contains over 200 million predicted structures, essentially the entire known protein universe, and is used by nearly 3.5 million researchers in 190 countries around the world. The Nature article published in 2021 describing the algorithm has been cited 40,000 times to date. Last year, AlphaFold 3 arrived, extending the capabilities of artificial intelligence to DNA, RNA, and drugs. That transition is not without challenges—such as “ structural hallucinations ” in the disordered regions of proteins—but it marks a step toward the future. To understand what the next five years holds for AlphaFold, WIRED spoke with Pushmeet Kohli, vice president of research at DeepMind and architect of its AI ​​for Science division. WIRED: Dr. Kohli, the arrival of AlphaFold 2 five years ago has been called "the iPhone moment" for biology. Tell us about the transition from challenges like the game of Go to a fundamental scientific problem like protein folding, and what was your role in this transition? Pushmeet Kohli: Science has been central to our mission from day one. Demis Hassabis founded Google DeepMind on the idea that AI could be the best tool ever invented for accelerating scientific discovery. Games were always a testing ground, and a way to develop techniques we knew would eventually tackle real-world problems. My role has really been about identifying and pursuing scientific problems where AI can make a transformative impact, outlining the key ingredients required to unlock progress, and bringing together a multidisciplinary team to work on these grand challenges. What AlphaGo proved was that neural networks combined with planning and search could master incredibly complex systems. Protein folding had those same characteristics. The crucial difference was that solving it would unlock discoveries across biology and medicine that could genuinely improve people's lives. We focus on what I call “root node problems,” areas where the scientific community agrees solutions would be transformative, but where conventional approaches won't get us there in the next five to 10 years. Think of it like a tree of knowledge—if you solve these root problems, you unlock entire new branches of research. Protein folding was definitely one of those. Looking ahead, I see three key areas of opportunity: building more powerful models that can truly reason and collaborate with scientists like a research partner, getting these tools into the hands of every scientist on the planet, and tackling even bolder ambitions, like creating the first accurate simulation of a complete human cell. Let's talk about hallucinations. You have repeatedly advocated the importance of a " harness " architecture, pairing a creative generative model with a rigorous verifier. How has this philosophy evolved from AlphaFold 2 to AlphaFold 3, specifically now that you are using diffusion models which are inherently more “imaginative” and prone to hallucination? The core philosophy hasn't changed—we still pair creative generation with rigorous verification. What's evolved is how we apply that principle to more ambitious problems. We've always been problem-first in our approach. We don't look for places to slot in existing techniques; we understand the problem deeply, then build whatever's needed to solve it. The shift to diffusion models in AlphaFold 3 came from what the science demanded: We needed to predict how proteins, DNA, RNA, and small molecules all interact together, not just individual protein structures. You're right to raise the hallucination concern with diffusion models being more generative. This is where verification becomes even more critical. We've built in confidence scores that signal when predictions might be less reliable, which is particularly important for intrinsically disordered proteins. But what really validates the approach is that over five years, scientists have tested AlphaFold predictions in their labs again and again. They trust it because it works in practice. You are launching the “AI co-scientist,” an agentic system built on Gemini 2.0 that generates and debates hypotheses. This sounds like the scientific method in a box. Are we moving toward a future where the “Principal Investigator” of a lab is an AI, and humans are merely the technicians verifying its experiments? What I see happening is a shift in how scientists spend their time. Scientists have always played dual roles—thinking about what problem needs solving, and then figuring out how to solve it. With AI helping more on the “how” part, scientists will have more freedom to focus on the “what,” or which questions are actually worth asking. AI can accelerate finding solutions, sometimes quite autonomously, but determining which problems deserve attention remains fundamentally human. Co-scientist is designed with this partnership in mind. It's a multi-agent system built with Gemini 2.0 that acts as a virtual collaborator: identifying research gaps, generating hypotheses, and suggesting experimental approaches. Recently, Imperial College researchers used it while studying how certain viruses hijack bacteria, which opened up new directions for tackling antimicrobial resistance. But the human scientists designed the validation experiments and grasped the significance for global health. The critical thing is understanding these tools properly, both their strengths and their limitations. That understanding is what enables scientists to use them responsibly and effectively. Can you share a concrete example—perhaps from your work on drug repurposing or bacterial evolution—where the AI agents disagreed, and that disagreement led to a better scientific outcome than a human working alone? The way the system works is quite interesting. We have multiple Gemini models acting as different agents that generate ideas, then debate and critique each other's hypotheses. The idea is that this internal back-and-forth, exploring different interpretations of the evidence, leads to more refined and creative research proposals. For example, researchers at Imperial College were investigating how certain “pirate phages”—these fascinating viruses that hijack other viruses—manage to break into bacteria. Understanding these mechanisms could open up entirely new ways of tackling drug-resistant infections, which is obviously a huge global health challenge. What Co-scientist brought to this work was the ability to rapidly analyze decades of published research and independently arrive at a hypothesis about bacterial gene transfer mechanisms that matched what the Imperial team had spent years developing and validating experimentally. What we're really seeing is that the system can dramatically compress the hypothesis generation phase—synthesizing vast amounts of literature quickly—whilst human researchers still design the experiments and understand what the findings actually mean for patients. Looking ahead to the next five years, besides proteins and materials, what is the "unsolved problem" that keeps you up at night that these tools can help with? What genuinely excites me is understanding how cells function as complete systems—and deciphering the genome is fundamental to that. DNA is the recipe book of life, proteins are the ingredients. If we can truly understand what makes us different genetically and what happens when DNA changes, we unlock extraordinary new possibilities. Not just personalized medicine, but potentially designing new enzymes to tackle climate change and other applications that extend well beyond health care. That said, simulating an entire cell is one of biology's major goals, but it's still some way off. As a first step, we need to understand the cell's innermost structure, its nucleus: precisely when each part of the genetic code is read, how the signaling molecules are produced that ultimately lead to proteins being assembled. Once we've explored the nucleus, we can work our way from the inside out. We're working toward that, but it will take several more years. If we could reliably simulate cells, we could transform medicine and biology. We could test drug candidates computationally before synthesis, understand disease mechanisms at a fundamental level, and design personalised treatments. That's really the bridge between biological simulation and clinical reality you're asking about—moving from computational predictions to actual therapies that help patients. This story originally appeared in WIRED Italia and has been translated from Italian.

Dec 24, 2025 Read →
New York’s landmark AI safety bill was defanged — and universities were part of the push against it
AI Pulse 7 min read

New York’s landmark AI safety bill was defanged — and universities were part of the push against it

AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Policy Close Policy Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Policy Report Close Report Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Report New York’s landmark AI safety bill was defanged — and universities were part of the push against it A group including Big Tech players and major universities fought against the RAISE Act, which got a last-minute rewrite. A group including Big Tech players and major universities fought against the RAISE Act, which got a last-minute rewrite. by Hayden Field Close Hayden Field Senior AI Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Hayden Field Dec 23, 2025, 4:18 PM UTC Link Share Hayden Field Close Hayden Field Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Hayden Field is The Verge’s senior AI reporter. An AI beat reporter for more than five years, her work has also appeared in CNBC, MIT Technology Review, Wired UK, and other outlets. A group of tech companies and academic institutions spent tens of thousands of dollars in the past month — likely between $17,000 and $25,000 — on an ad campaign against New York’s landmark AI safety bill, which may have reached more than two million people, according to Meta’s Ad Library. The landmark bill is called the RAISE Act, or the Responsible AI Safety and Education Act, and days ago, a version of it was signed by New York Governor Kathy Hochul. The closely watched law dictates that AI companies developing large models — OpenAI, Anthropic, Meta, Google, DeepSeek, etc. — must outline safety plans and transparency rules for reporting large-scale safety incidents to the attorney general. But the version Hochul signed — different than the one passed in both the New York State Senate and the Assembly in June — was a rewrite that made it much more favorable to tech companies. A group of more than 150 parents had sent the governor a letter urging her to sign the bill without changes. And the group of tech companies and academic institutions, called the AI Alliance, were part of the charge to defang it. The AI Alliance — the organization behind the opposition ad campaign — counts Meta, IBM, Intel, Oracle, Snowflake, Uber, AMD, Databricks, and Hugging Face among its members, which is not necessarily surprising. The group sent a letter in June to New York lawmakers about its “deep concern” about the bill and deemed it “unworkable.” But the group isn’t just made up of tech companies. Its members include a number of colleges and universities all around the world, including New York University, Cornell University, Dartmouth College, Carnegie Mellon University, Northeastern University, Louisiana State University, and the University of Notre Dame, as well as Penn Engineering and Yale Engineering. The ads began on November 23 and ran with the title, “The RAISE Act will stifle job growth.” They said that the legislation “would slow down the New York technology ecosystem powering 400,000 high-tech jobs and major investments. Rather than stifling innovation, let’s champion a future where AI development is open, trustworthy, and strengthens the Empire State.” When The Verge asked the academic institutions listed above whether they were aware they had been inadvertently part of an ad campaign against widely discussed AI safety legislation, none responded to a request for comment, besides Northeastern, which did not provide a comment by publication time. In recent years, OpenAI and its competitors have increasingly been courting academic institutions to be part of research consortiums or offering technology directly to students for free. Many of the academic institutions that are part of the AI Alliance aren’t directly involved in one-on-one partnerships with AI companies, but some are. For instance, Northeastern’s partnership with Anthropic this year translated to Claude access for 50,000 students, faculty, and staff across 13 global campuses, per Anthropic’s announcement in April . In 2023, OpenAI funded a journalism ethics initiative at NYU. Dartmouth announced a partnership with Anthropic earlier this month , a Carnegie Mellon University professor currently serves on OpenAI’s board, and Anthropic has funded programs at Carnegie Mellon. The initial version of the RAISE Act stated that developers must not release a frontier model “if doing so would create an unreasonable risk of critical harm,” which the bill defines as the death or serious injury of 100 people or more, or $1 billion or more in damages to rights in money or property stemming from the creation of a chemical, biological, radiological, or nuclear weapon. That definition also extends to an AI model that “acts with no meaningful human intervention” and “would, if committed by a human,” fall under certain crimes. The version Hochul signed removed this clause . Hochul also increased the deadline for disclosure for safety incidents and lessened fines, among other changes. The AI Alliance has lobbied previously against AI safety policies, including the RAISE Act, California’s SB 1047 , and President Biden’s AI executive order . It states that its mission is to “bring together builders and experts from various fields to collaboratively and transparently address the challenges of generative AI and democratize its benefits,” especially via “member-driven working groups.” Some of the group’s projects beyond lobbying have involved cataloguing and managing “trustworthy” datasets and creating a ranked list of AI safety priorities. The AI Alliance wasn’t the only organization opposing the RAISE Act with ad dollars. As The Verge wrote recently , Leading the Future, a pro-AI super PAC backed by Perplexity AI, Andreessen Horowitz (a16z), Palantir cofounder Joe Lonsdale, and OpenAI president Greg Brockman, has spent money on ads targeting the cosponsor of the RAISE Act, New York State Assemblymember Alex Bores. But Leading the Future is a super PAC with a clear agenda, whereas the AI Alliance is a nonprofit that’s partnered with a trade association — with the mission of “developing AI collaboratively, transparently, and with a focus on safety, ethics, and the greater good.” Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates. Hayden Field Close Hayden Field Senior AI Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Hayden Field AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Policy Close Policy Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Policy Report Close Report Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Report Most Popular Most Popular GOG’s Steam-alternative PC game store is leaving CD Projekt, staying DRM-free Turn your PC into a Super Nintendo with Epilogue’s new USB dock LG is announcing its own Frame-style TV at CES Windows on Arm had another good year This experimental camera can focus on everything at once The Verge Daily A free daily digest of the news that matters most. Email (required) Sign Up By submitting your email, you agree to our Terms and Privacy Notice . This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply. Advertiser Content From This is the title for the native ad

Dec 23, 2025 Read →
How AI broke the smart home in 2025
AI Pulse 14 min read

How AI broke the smart home in 2025

Tech Close Tech Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Tech AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Amazon Close Amazon Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Amazon How AI broke the smart home in 2025 The arrival of generative AI assistants in our smart homes held such promise; instead, they struggle to turn on the lights. If you buy something from a Verge link, Vox Media may earn a commission. See our ethics statement. by Jennifer Pattison Tuohy Close Jennifer Pattison Tuohy Senior Reviewer, Smart Home Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Jennifer Pattison Tuohy Dec 23, 2025, 1:30 PM UTC Link Share If you buy something from a Verge link, Vox Media may earn a commission. See our ethics statement. Image: Cath Virginia / The Verge, Getty Images Part Of The Verge’s 2025 in review see all Jennifer Pattison Tuohy Close Jennifer Pattison Tuohy Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Jennifer Pattison Tuohy is a senior reviewer with over twenty years of experience. She covers smart home, IoT, and connected tech, and has written previously for Wirecutter , Wired , Dwell , BBC , and US News . This morning, I asked my Alexa-enabled Bosch coffee machin e to make me a coffee. Instead of running my routine , it told me it couldn’t do that. Ever since I upgraded to Alexa Plus, Amazon’s generative-AI-powered voice assistant, it has failed to reliably run my coffee routine, coming up with a different excuse almost every time I ask. It’s 2025, and AI still can’t reliably control my smart home. I’m beginning to wonder if it ever will. The potential for generative AI and large language models to take the complexity out of the smart home, making it easier to set up, use, and manage connected devices, is compelling. So is the promise of a “ new intelligence layer ” that could unlock a proactive, ambient home. But this year has shown me that we are a long way from any of that. Instead, our reliable but limited voice assistants have been replaced with “smarter” versions that, while better conversationalists, can’t consistently do basic tasks like operating appliances and turning on the lights. I want to know why. I’m still waiting on the promise of voice assistants that can seamlessly control my smart home. Photo by Jennifer Pattison Tuohy / The Verge This wasn’t the future we were promised. It was back in 2023, during an interview with Dave Limp , that I first became intrigued by the possibilities of generative AI and large language models for improving the smart home experience. Limp, then the head of Amazon’s Devices & Services division that oversees Alexa, was describing the capabilities of the new Alexa they were soon to launch (spoiler alert: it wasn’t soon ). Along with a more conversational assistant that could actually understand what you said no matter how you said it, what stood out to me was the promise that this new Alexa could use its knowledge of the devices in your smart home, combined with the hundreds of APIs they plugged into it, to give the assistant the context it needed to make your smart home easier to use. From setting up devices to controlling them, unlocking all their features, and managing how they can interact with other devices, a smarter smart home assistant seemed to hold the potential to not only make it easier for enthusiasts to manage their gadgets but also make it easier for everyone to enjoy the benefits of the smart home. Related The problems with AI in the smart home I tasked Alexa Plus with tackling my to-do list — it was hit or miss I let Gemini watch my family for the weekend — it got weird AI companies want a new internet — and they think they’ve found the key AI can’t even turn on the lights Fast-forward three years, and the most useful smart home AI upgrade we have is AI-powered descriptions for security camera notifications . It’s handy, but it’s hardly the sea change I had hoped for. It’s not that these new smart home assistants are a complete failure. There’s a lot I like about Alexa Plus; I even named it as my smart home software pick of the year . It is more conversational, understands natural language , and can answer many more random questions than the old Alexa. While it sometimes struggles with basic commands, it can understand complex ones; saying “I want it dimmer in here and warmer” will adjust the lights and crank up the thermostat. It’s better at managing my calendar, helping me cook, and other home-focused features. Setting up routines with voice is a huge improvement over wrestling with the Alexa app — even if running them isn’t as reliable. Google’s new Gemini for Home AI-powered smart home assistant won’t fully launch until next spring, when its new smart speaker arrives. Photo by Jennifer Pattison Tuohy / The Verge Google has promised similar capabilities with its Gemini for Home upgrade to its smart speakers, although that’s rolling out at a glacial pace , and I haven’t been able to try it beyond some on-the-rails demos . I was able to test Gemini for Home’s feature that attempts to summarize what’s happened at my home using AI-generated text descriptions from Nest camera footage. It was wildly inaccurate . As for Apple’s Siri, it’s still firmly stuck in the last decade of voice assistants, and it appears it will stay there for a while longer . The problem is that the new assistants aren’t as consistent at controlling smart home devices as the old ones. While they were often frustrating to use, the old Alexa and Google Assistant (and the current Siri) would generally always turn on the lights when you asked them to, provided you used precise nomenclature. Today, their “upgraded” counterparts struggle with consistency in basic functions like turning on the lights , setting timers, reporting on the weather, playing music , and running the routines and automations on which many of us have built our smart homes. I’ve noticed this in my testing, and online forums are full of users who have encountered it. Amazon and Google have acknowledged the struggles they’ve had in making their revamped generative AI-powered assistants reliably perform basic tasks . And it’s not limited to smart home assistants; ChatGPT can’t consistently tell time or count. Why is this, and will it ever get better? To understand the problem, I spoke with two professors in the field of human-centric artificial intelligence with experience with agentic AI and smart home systems. My takeaway from those conversations is that, while it’s possible to make these new voice assistants do almost exactly what the old ones did, it will take a lot of work, and that’s possibly work most companies just aren’t interested in doing. Basically, we’re all beta testers for the AI. Considering there are limited resources in this field and ample opportunity to do something much more exciting (and more profitable) than reliably turn on the lights, that’s the way they’re moving, according to experts I spoke with. Given all these factors, it seems the easiest way to improve the technology is to just deploy it in the real world and let it improve over time. Which is likely why Alexa Plus and Gemini for Home are in “early access” phases. Basically, we’re all beta testers for the AI. The bad news is it could be a while until it gets better. In his research, Dhruv Jain , assistant professor of Computer Science & Engineering at the University of Michigan and director of the Soundability Lab , has also found that newer models of smart home assistants are less reliable. “It’s more conversational, people like it, people like to talk to it, but it’s not as good as the previous one,” he says. “I think [tech companies’] model has always been to release it fairly fast, collect data, and improve on it. So, over a few years, we might get a better model, but at the cost of those few years of people wrestling with it.” The Alexa that launched in 2014 on the original Echo smart speaker isn’t capable enough for the future Amazon is working toward. Image: Amazon The inherent problem appears to be that the old and new technologies don’t mesh. So, to build their new voice assistants, Amazon, Google, and Apple have had to throw out the old and build something entirely new . However, they quickly discovered that these new LLMs were not designed for the predictability and repetitiveness that their predecessors excelled at. “It was not as trivial an upgrade as everyone originally thought,” says Mark Riedl, a professor at the School of Interactive Computing at Georgia Tech . “LLMs understand a lot more and are open to more arbitrary ways to communicate, which then opens them to interpretation and interpretation mistakes.” Basically, LLMs just aren’t designed to do what prior command-and-control-style voice assistants did. “Those voice assistants are what we call ‘template matchers,’” explains Riedl. “They look for a keyword, when they see it, they know that there are one to three additional words to expect.” For example, you say “Play radio,” and they know to expect a station call code next. “It was not as trivial an upgrade as everyone originally thought.” — Mark Riedl LLMs, on the other hand, “bring in a lot of stochasticity — randomness,” explains Riedl. Asking ChatGPT the same prompt multiple times may produce multiple responses. This is part of their value, but it’s also why when you ask your LLM-powered voice assistant to do the same thing you asked it yesterday, it might not respond the same way. “This randomness can lead to misunderstanding basic commands because sometimes they try to overthink things too much,” he says. To fix this, companies like Amazon and Google have develop

Dec 23, 2025 Read →
Google’s and OpenAI’s Chatbots Can Strip Women in Photos Down to Bikinis
AI Pulse 5 min read

Google’s and OpenAI’s Chatbots Can Strip Women in Photos Down to Bikinis

Reece Rogers Gear Dec 23, 2025 6:30 AM Google’s and OpenAI’s Chatbots Can Strip Women in Photos Down to Bikinis Users of AI image generators are offering each other instructions on how to use the tech to alter pictures of women into realistic, revealing deepfakes. Photo-Illustration: Wired Staff; Getty Images Comment Loader Save Story Save this story Comment Loader Save Story Save this story Some users of popular chatbots are generating bikini deepfakes using photos of fully clothed women as their source material. Most of these fake images appear to be generated without the consent of the women in the photos. Some of these same users are also offering advice to others on how to use the generative AI tools to strip the clothes off of women in photos and make them appear to be wearing bikinis. Under a now-deleted Reddit post titled “gemini nsfw image generation is so easy,” users traded tips for how to get Gemini , Google’s generative AI model, to make pictures of women in revealing clothes. Many of the images in the thread were entirely AI, but one request stood out. A user posted a photo of a woman wearing an Indian sari, asking for someone to “remove” her clothes and “put a bikini” on instead. Someone else replied with a deepfake image to fulfil the request. After WIRED notified Reddit about these posts and asked the company for comment, Reddit’s safety team removed the request and the AI deepfake. “Reddit's sitewide rules prohibit nonconsensual intimate media, including the behavior in question,” said a spokesperson. The subreddit where this discussion occurred, r/ChatGPTJailbreak, had over 200,000 followers before Reddit banned it under the platform’s “ don't break the site ” rule. As generative AI tools that make it easy to create realistic but false images continue to proliferate, users of the tools have continued to harass women with nonconsensual deepfake imagery. Millions have visited harmful “nudify” websites , designed for users to upload real photos of people and request for them to be undressed using generative AI. With xAI’s Grok as a notable exception, most mainstream chatbots don’t usually allow the generation of NSFW images in AI outputs. These bots, including Google’s Gemini and OpenAI’s ChatGPT, are also fitted with guardrails that attempt to block harmful generations. In November, Google released Nano Banana Pro , a new imaging model that excels at tweaking existing photos and generating hyperrealistic images of people. OpenAI responded last week with its own updated imaging model, ChatGPT Images . As these tools improve, likenesses may become more realistic when users are able to subvert guardrails. In a separate Reddit thread about generating NSFW images, a user asked for recommendations on how to avoid guardrails when adjusting someone’s outfit to make the subject’s skirt appear tighter. In WIRED’s limited tests to confirm that these techniques worked on Gemini and ChatGPT, we were able to transform images of fully clothed women into bikini deepfakes using basic prompts written in plain English. When asked about users generating bikini deepfakes using Gemini, a spokesperson for Google said the company has "clear policies that prohibit the use of [its] AI tools to generate sexually explicit content." The spokesperson claims Google's tools are continually improving at "reflecting" what's laid out in its AI policies . In response to WIRED’s request for comment about users being able to generate bikini deepfakes with ChatGPT, a spokesperson for OpenAI claims the company loosened some ChatGPT guardrails this year around adult bodies in nonsexual situations. The spokesperson also highlights OpenAI’s usage policy , stating that ChatGPT users are prohibited from altering someone else’s likeness without consent and that the company takes action against users generating explicit deepfakes, including account bans. Online discussions about generating NSFW images of women remain active. This month, a user in the r/GeminiAI subreddit offered instructions to another user on how to change women's outfits in a photo into bikini swimwear. (Reddit deleted this comment when we pointed it out to them.) Corynne McSherry, a legal director at the Electronic Frontier Foundation , sees “abusively sexualized images” as one of AI image generators' core risks. She mentions that these image tools can be used for other purposes outside of deepfakes and that focusing how the tools are used is critical—as well as “holding people and corporations accountable” when potential harm is caused.

Dec 23, 2025 Read →
ChatGPT’s yearly recap sums up your conversations with the chatbot
AI Pulse 3 min read

ChatGPT’s yearly recap sums up your conversations with the chatbot

News Close News Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All News AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Tech Close Tech Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Tech ChatGPT’s yearly recap sums up your conversations with the chatbot You’ll get a personalized ‘award’ and an AI-generated image based on how you used ChatGPT this year. You’ll get a personalized ‘award’ and an AI-generated image based on how you used ChatGPT this year. by Emma Roth Close Emma Roth News Writer Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Emma Roth Dec 22, 2025, 10:12 PM UTC Link Share Image: The Verge Part Of The annual app recaps for 2025: all Wrapped up see all updates Emma Roth Close Emma Roth Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Emma Roth is a news writer who covers the streaming wars, consumer tech, crypto, social media, and much more. Previously, she was a writer and editor at MUO. ChatGPT is joining the flood of apps offering yearly recaps for users. It’s rolling out a “Year in Review” feature that will show you a bunch of stats — like how many messages you sent to the chatbot in 2025 — as well as give you an AI-generated pixel art-style image that encompasses some of the topics you talked about this year. The image I received showed an aquarium beside a game cartridge, an Instant Pot, and a computer screen, reflecting some of the questions I had related to retro game consoles, cooking, and my fish tank setup. Image: ChatGPT There are other personalized summaries, too, like a rundown of the themes most prevalent in your chats, a description of your chat style, and which day you sent the most messages to the chatbot. You’ll also see an “archetype” that puts you into a category based on how you used the app this year, such as “The Producer” or “The Navigator,” along with a customized award, like the one I got: “Instant Pot Prodigy.” The yearly recaps are rolling out now to users in the US, UK, Canada, New Zealand, and Australia. It’s only available if you’ve given ChatGPT permission to reference your past conversations and personal preferences. You can find your year in review by selecting the option on the homepage of the ChatGPT app on mobile or desktop, or by prompting ChatGPT to “show my year in review.” Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates. Emma Roth Close Emma Roth News Writer Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Emma Roth AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Apps Close Apps Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Apps News Close News Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All News OpenAI Close OpenAI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All OpenAI Tech Close Tech Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Tech More in: The annual app recaps for 2025: all Wrapped up AT&T’s ‘wrapped’ says its network now averages an exabyte of data every day. Richard Lawler Dec 23 Steam’s 2025 Replay is live. Jay Peters Dec 16 Your year in NYT games. Jay Peters Dec 16 Most Popular Most Popular GOG’s Steam-alternative PC game store is leaving CD Projekt, staying DRM-free Turn your PC into a Super Nintendo with Epilogue’s new USB dock LG is announcing its own Frame-style TV at CES This experimental camera can focus on everything at once The Canon EOS R6 Mark III is great, but this lens is amazing The Verge Daily A free daily digest of the news that matters most. Email (required) Sign Up By submitting your email, you agree to our Terms and Privacy Notice . This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply. Advertiser Content From This is the title for the native ad

Dec 22, 2025 Read →
Indie Game Awards retracts Expedition 33 prizes due to generative AI
AI Pulse 4 min read

Indie Game Awards retracts Expedition 33 prizes due to generative AI

News Close News Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All News AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Entertainment Close Entertainment Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Entertainment Indie Game Awards retracts Expedition 33 prizes due to generative AI The organization also retracted an award from a game sold by Palmer Luckey’s ModRetro. The organization also retracted an award from a game sold by Palmer Luckey’s ModRetro. by Jay Peters Close Jay Peters Senior Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Jay Peters Dec 22, 2025, 6:47 PM UTC Link Share Image: Sandfall Interactive Jay Peters Close Jay Peters Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Jay Peters is a senior reporter covering technology, gaming, and more. He joined The Verge in 2019 after nearly two years at Techmeme. Clair Obscur: Expedition 33 had earned another Game of the Year award by the Indie Game Awards last week, but the organization has since announced that the award would be retracted because developer Sandfall Interactive used generative AI during development. The Indie Game Awards also rescinded the Debut Game award given to Expedition 33 . Here is the Indie Game Awards’ explanation, from an FAQ : The Indie Game Awards have a hard stance on the use of gen AI throughout the nomination process and during the ceremony itself. When it was submitted for consideration, a representative of Sandfall Interactive agreed that no gen AI was used in the development of Clair Obscur: Expedition 33. In light of a resurfaced interview with Sandfall Interactive confirming the use of gen AI art in production being brought to our attention on the day of the Indie Game Awards 2025 premiere, this does disqualify Clair Obscur: Expedition 33 from its nomination. While the assets in question were patched out and it is a wonderful game, it does go against the regulations we have in place. The Indie Game Awards’ Mike Towndrow also explained the decision in a video on Bluesky . The Indie Game Awards’ criteria, as outlined in that FAQ, says that “Games developed using generative AI are strictly ineligible for nomination.” Sandfall Interactive didn’t immediately reply to a request for comment. Game of the Year is instead being awarded to puzzle game Blue Prince . Publisher Raw Fury said on Sunday that “there is no AI used in Blue Prince” and that the game “was built and crafted with full human instinct” by Tonda Ros and the Dogubomb team. “As gen AI becomes more prevalent in our industry, we will better navigate it appropriately,” the Indie Game Awards says. Related Larian’s CEO says the studio isn’t ‘trimming down teams to replace them with AI’ Clair Obscur: Expedition 33 wins Game of the Year. The Indie Game Awards is also retracting an Indie Vanguard award from studio Gortyn Code, which developed the Game Boy-inspired game Chantey . The game is sold on a physical cartridge by Palmer Luckey’s ModRetro, which makes the Chromatic Game Boy. Luckey also founded defense contractor Anduril, and Chromatic recently announced an Anduril-branded Chromatic made from “the same magnesium aluminum alloy as Anduril’s attack drones.” “The IGAs nomination committee were unfortunately made aware of ModRetro’s nature and principles the day after the 2025 premiere with the news of their upcoming handheld console,” the Indie Game Awards says. Because of Chantey ’s ties with ModRetro, “Indie Vanguard has also been retracted as we do not want to provide the company with a platform.” The organization also says that “The decision does not reflect Gortyn Code, but ModRetro alone.” Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates. Jay Peters Close Jay Peters Senior Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Jay Peters AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Entertainment Close Entertainment Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Entertainment Gaming Close Gaming Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Gaming News Close News Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All News Most Popular Most Popular GOG’s Steam-alternative PC game store is leaving CD Projekt, staying DRM-free Turn your PC into a Super Nintendo with Epilogue’s new USB dock LG is announcing its own Frame-style TV at CES The Canon EOS R6 Mark III is great, but this lens is amazing This experimental camera can focus on everything at once The Verge Daily A free daily digest of the news that matters most. Email (required) Sign Up By submitting your email, you agree to our Terms and Privacy Notice . This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply. Advertiser Content From This is the title for the native ad

Dec 22, 2025 Read →
OpenAI’s Child Exploitation Reports Increased Sharply This Year
AI Pulse 6 min read

OpenAI’s Child Exploitation Reports Increased Sharply This Year

Maddy Varner Business Dec 22, 2025 11:32 AM OpenAI’s Child Exploitation Reports Increased Sharply This Year The company made 80 times as many reports to the National Center for Missing & Exploited Children during the first six months of 2025 as it did in the same period a year prior. Photo-Illustration: WIRED Staff; Getty Images Save Story Save this story Save Story Save this story OpenAI sent 80 times as many child exploitation incident reports to the National Center for Missing & Exploited Children during the first half of 2025 as it did during a similar time period in 2024, according to a recent update from the company. The NCMEC’s CyberTipline is a Congressionally authorized clearinghouse for reporting child sexual abuse material (CSAM) and other forms of child exploitation. Companies are required by law to report apparent child exploitation to the CyberTipline. When a company sends a report, NCMEC reviews it and then forwards it to the appropriate law enforcement agency for investigation. Statistics related to NCMEC reports can be nuanced. Increased reports can sometimes indicate changes in a platform’s automated moderation, or the criteria it uses to decide whether a report is necessary, rather than necessarily indicating an increase in nefarious activity. Additionally, the same piece of content can be the subject of multiple reports, and a single report can be about multiple pieces of content. Some platforms, including OpenAI, disclose the number of both the reports and the total pieces of content they were about for a more complete picture. OpenAI spokesperson Gaby Raila said in a statement that the company made investments toward the end of 2024 “to increase [its] capacity to review and action reports in order to keep pace with current and future user growth.” Raila also said that the time frame corresponds to “the introduction of more product surfaces that allowed image uploads and the growing popularity of our products, which contributed to the increase in reports.” In August, Nick Turley, vice president and head of ChatGPT, announced that the app had four times the amount of weekly active users than it did the year before. During the first half of 2025, the number of CyberTipline reports OpenAI sent was roughly the same as the amount of content OpenAI sent the reports about—75,027 compared to 74,559. In the first half of 2024, it sent 947 CyberTipline reports about 3,252 pieces of content. Both the number of reports and pieces of content the reports saw a marked increase between the two time periods. Content, in this context, could mean multiple things. OpenAI has said that it reports all instances of CSAM, including uploads and requests, to NCMEC. Besides its ChatGPT app, which allows users to upload files—including images—and can generate text and images in response, OpenAI also offers access to its models via API access. The most recent NCMEC count wouldn’t include any reports related to video-generation app Sora , as its September release was after the time frame covered by the update. The spike in reports follows a similar pattern to what NCMEC has observed at the CyberTipline more broadly with the rise of generative AI. The center’s analysis of all CyberTipline data found that reports involving generative AI saw a 1,325 percent increase between 2023 and 2024. NCMEC has not yet released 2025 data, and while other large AI labs like Google publish statistics about the NCMEC reports they’ve made, they don’t specify what percentage of those reports are AI-related. OpenAI’s update comes at the end of a year where the company and its competitors have faced increased scrutiny over child safety issues beyond just CSAM. Over the summer, 44 state attorneys general sent a joint letter to multiple AI companies including OpenAI, Meta, Character.AI, and Google, warning that they would “use every facet of our authority to protect children from exploitation by predatory artificial intelligence products.” Both OpenAI and Character.AI have faced multiple lawsuits from families or on behalf of individuals who allege that the chatbots contributed to their children’s deaths. In the fall, the US Senate Committee on the Judiciary held a hearing on the harms of AI chatbots, and the US Federal Trade Commission launched a market study on AI companion bots that included questions about how companies are mitigating negative impacts, particularly to children. (I was previously employed by the FTC and was assigned to work on the market study prior to leaving the agency.) In recent months, OpenAI has rolled out new safety-focused tools more broadly. In September, OpenAI rolled out several new features for ChatGPT, including parental controls , as part of its work “to give families tools to support their teens’ use of AI.” Parents and their teens can link their accounts, and parents can change their teen’s settings, including by turning off voice mode and memory, removing the ability for ChatGPT to generate images, and opting their kid out of model training. OpenAI said it could also notify parents if their teen’s conversations showed signs of self-harm, and potentially also notify law enforcement if it detected an imminent threat to life and wasn’t able to get in touch with a parent. In late October, to cap off negotiations with the California Department of Justice over its proposed recapitalizations plan, OpenAI agreed to “continue to undertake measures to mitigate risks to teens and others in connection with the development and deployment of AI and of AGI.” The following month, OpenAI released its Teen Safety Blueprint , in which it said it was constantly improving its ability to detect child sexual abuse and exploitation material, and reporting confirmed CSAM to relevant authorities, including NCMEC.

Dec 22, 2025 Read →
Chipwrecked
AI Pulse 41 min read

Chipwrecked

AI Close AI Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All AI Business Close Business Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Business Features Close Features Posts from this topic will be added to your daily email digest and your homepage feed. Follow Follow See All Features Chipwrecked Nvidia has built an empire on circular deals for chips. Can anything knock it down? If you buy something from a Verge link, Vox Media may earn a commission. See our ethics statement. by Elizabeth Lopatto Close Elizabeth Lopatto Senior Reporter Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Elizabeth Lopatto Dec 22, 2025, 4:00 PM UTC Link Share If you buy something from a Verge link, Vox Media may earn a commission. See our ethics statement. Cath Virginia / The Verge Part Of Chip race: Microsoft, Meta, Google, and Nvidia battle it out for AI chip supremacy see all updates Elizabeth Lopatto Close Elizabeth Lopatto Posts from this author will be added to your daily email digest and your homepage feed. Follow Follow See All by Elizabeth Lopatto is a reporter who writes about tech, money, and human behavior. She joined The Verge in 2014 as science editor. Previously, she was a reporter at Bloomberg. The AI data center build-out, as it currently stands, is dependent on two things: Nvidia chips and borrowed money. Perhaps it was inevitable that people would begin using Nvidia chips to borrow money. As the craze has gone on, I have begun to worry about the weaknesses of the AI data center boom; looking deeper into the financial part of this world, I have not been reassured. Nvidia has plowed plenty of money into the AI space, with more than 70 investments in AI companies just this year, according to PitchBook data. Among the billions it’s splashed out, there’s one important category: neoclouds, as exemplified by CoreWeave , the publicly traded, debt-laden company premised on the bet that we will continue building data centers forever. CoreWeave and its ilk have turned around and taken out debt to buy Nvidia chips to put in their data centers, putting up the chips themselves as loan collateral — and in the process effectively turning $1 in Nvidia investment into $5 in Nvidia purchases. This is great for Nvidia. I’m not convinced it’s great for anyone else. Do you have information about loans in the AI industry? You can reach Liz anonymously at lopatto.46 on Signal using a non-work device. There has been a lot of talk about the raw technical details of how these chips depreciate, and specifically whether these chips lose value so fast they make these loans absurd. While I am impressed by the sheer amount of nerd energy put into this question, I do feel this somewhat misses the point: the loans mean that Nvidia has an incentive to bail out this industry for as long as it can because the majority of GPU-backed loans are made using Nvidia’s own chips as collateral. Of course, that also means that if something goes wrong with Nvidia’s business, this whole sector is in trouble. And judging by the increasing competition its chips face, something could go wrong soon. Can startups outrun chip depreciation — and is it happening faster than they say? Loans based on depreciating assets are nothing new. For the terminally finance-brained, products like GPUs register as interchangeable widgets (in the sense of “an unnamed article considered for purposes of hypothetical example ,” not “gadget” or “software application”) not substantively different from trucks, airplanes, or houses. So a company like CoreWeave can package some chips up with AI customer contracts and a few other assets and assemble a valuable enough bundle to secure debt, typically for buying more chips. If it defaults on the loan, the lender can repossess the collateral, the same way a bank can repossess a house. One way lenders can hedge their bets against risky assets is by pricing the risk into the interest rate. (There is another way of understanding debt, and we will get there in a minute.) A 10-year mortgage on a house is currently 5.3 percent. CoreWeave’s first GPU-backed loan, made in 2023, had 14 percent interest in the third quarter of this year. (The rate floats.) “You have so many forces acting in making them a natural monopoly, and this amplifies that.” Another way lenders can try to reduce their risk is by asking for a high percentage of collateral relative to the loan. This is expressed as a loan-to-value ratio (LTV). If I buy a house for $500,000, I usually have to contribute a downpayment — call it 20 percent — and use my loan for the rest. That loan, for $400,000, means I have a (LTV) ratio of 80 percent. GPU loans’ LTV vary widely, based on how long the loan is, faith in companies’ management teams, and other contract factors, says Ryan Little, the senior managing director of equipment financing at Trinity Capital , who has made GPU loans. Some of these loans have LTVs as low as 50 percent; others are as high as 110 percent. GPU-backed loans are competitive, and Trinity Capital has occasionally lost deals to other lenders as well as vendor financing programs. The majority of these loans are made on Nvidia chips, which could solidify the company’s hold on the market, says Vikrant Vig, a professor of finance at Stanford University’s graduate school of business. If a company needs to buy GPUs, it might get a lower cost of financing on Nvidia’s, because Nvidia GPUs are more liquid. “You have so many forces acting in making them a natural monopoly,” Vig says, “and this amplifies that.” Figuring out how much GPUs are worth and how long they’ll last is not as clear as it is with a house Nvidia declined to comment. CoreWeave declined to comment. Not everyone is sold on the loans. “At current market prices, we don’t do them and we don’t evaluate them,” says Keri Findley, the CEO of Tacora Capital. With a car, she knows the depreciation curve over time. But she’s less sure about GPUs. For now, she guesses GPUs will depreciate very, very quickly. First, the chip’s power might be leased to Microsoft, but it might need to be leased a second or third time to be worth investing in. It’s not yet clear how much of a secondary or tertiary market there will be for old chips. Figuring out how much GPUs are worth and how long they’ll last is not as clear as it is with a house. In a corporate filing, CoreWeave notes that how much it can borrow depends on how much the GPUs are worth, and that will decrease as the GPUs have less value. The value, however, is fixed — and so if the value of the GPUs deteriorates faster than projected, CoreWeave will have to top off its loans. Some investors, including famed short-seller Michael Burry, claim that many companies are making depreciation estimates that are astonishingly wrong — by claiming GPUs will be valuable for longer than they will be in reality. According to Burry, the so-called hyperscalers (Google, Meta, Microsoft, Oracle, and Amazon) are understating depreciation of their chips by $176 billion between 2026 and 2028. Little is betting that even if some of the AI companies vanish, there will still be plenty of demand for the chips that secure the loan Burry isn’t primarily concerned with neoclouds, but they are uniquely vulnerable. The hyperscalers can take a write-down without too much damage if they have to — they have other lines of business. The neoclouds can’t. At minimum they will have to take write-downs; at maximum, there will be write-downs and complications on their expensive loans. They may have to provide more collateral at a time when there’s less demand for their services, which also can command less cash than before. Trinity Capital is keeping its loans on its books; Little is betting that even if some of the AI companies vanish, there will still be plenty of demand for the chips that secure the loans. Let’s say one of the neoclouds is forced into bankruptcy because it’s gotten its chips’ depreciation wrong, or for some other reason. Most of their customers may very well continue running their programs while banks repossess the servers and then sell them for pennies on the dollar. This is not the end of the world for the neocloud’s lenders or customers, though it’s probably annoying. That situation will, however, bite Nvidia twice: first by flooding the market with its old chips, and second by reducing its number of customers. And if something happens that makes several of these companies fail at once, the situation is worse. So how vulnerable is Nvidia? The risky business of banking on GPUs Part of what’s fueling the AI lending boom is private credit firms, which both need to produce returns for their investors and outcompete each other. If they miscalculate how risky the GPU loans are, they may very well get hit — and the impact could ripple out to banks. That could lead to widespread chaos in the broader economy. Earlier, we talked about understanding interest rates as pricing risk. There is another, perhaps more nihilistic, way of understanding interest rates: as the simple result of supply and demand. Loans are a product like any other. Particularly for lenders that don’t plan on keeping them on their own books, pricing risk may not be a primary concern — making and flipping the loans are. AI spending is exorbitant — analysts from Morgan Stanley expect $3 trillion in spending by the end of 2028 Here’s a way of thinking about it: Let’s say a neocloud startup called WarSieve comes to my private credit agency, Problem Child Holdings, and says, “Hey, there’s a global shortage of GPUs, and we have a bunch. Can we borrow against them?” I might respond, “Well, I don’t really know if there’s a market for these and I’m scared you might be riff raff. Let’s do a 15 percent interest rate.” WarSieve doesn’t have better options, so it agr

Dec 22, 2025 Read →

Explore Other Paths

View All Paths →