🧠 AI Pulse 6 min read

OpenAI’s Child Exploitation Reports Increased Sharply This Year

By Maddy Varner

Monday, December 22, 2025


The company made 80 times as many reports to the National Center for Missing & Exploited Children during the first six months of 2025 as it did in the same period a year prior.

Photo-Illustration: WIRED Staff; Getty Images

OpenAI sent 80 times as many child exploitation incident reports to the National Center for Missing & Exploited Children (NCMEC) during the first half of 2025 as it did during the same period in 2024, according to a recent update from the company.

The NCMEC's CyberTipline is a congressionally authorized clearinghouse for reporting child sexual abuse material (CSAM) and other forms of child exploitation. Companies are required by law to report apparent child exploitation to the CyberTipline. When a company sends a report, NCMEC reviews it and then forwards it to the appropriate law enforcement agency for investigation.

Statistics related to NCMEC reports can be nuanced. Increased reports can sometimes reflect changes in a platform's automated moderation, or in the criteria it uses to decide whether a report is necessary, rather than an increase in nefarious activity. Additionally, the same piece of content can be the subject of multiple reports, and a single report can cover multiple pieces of content. Some platforms, including OpenAI, disclose both the number of reports and the total pieces of content they covered, for a more complete picture.

OpenAI spokesperson Gaby Raila said in a statement that the company made investments toward the end of 2024 "to increase [its] capacity to review and action reports in order to keep pace with current and future user growth." Raila also said that the time frame corresponds to "the introduction of more product surfaces that allowed image uploads and the growing popularity of our products, which contributed to the increase in reports." In August, Nick Turley, vice president and head of ChatGPT, announced that the app had four times as many weekly active users as it did the year before.

During the first half of 2025, the number of CyberTipline reports OpenAI sent was roughly the same as the amount of content those reports covered: 75,027 reports about 74,559 pieces of content. In the first half of 2024, it sent 947 CyberTipline reports about 3,252 pieces of content. Both the number of reports and the number of pieces of content they covered increased markedly between the two periods.

Content, in this context, could mean multiple things. OpenAI has said that it reports all instances of CSAM, including uploads and requests, to NCMEC. Besides its ChatGPT app, which allows users to upload files, including images, and can generate text and images in response, OpenAI also offers access to its models via an API. The most recent NCMEC count wouldn't include any reports related to the video-generation app Sora, as its September release came after the time frame covered by the update.

The spike in reports follows a pattern NCMEC has observed at the CyberTipline more broadly with the rise of generative AI. The center's analysis of all CyberTipline data found that reports involving generative AI increased 1,325 percent between 2023 and 2024. NCMEC has not yet released 2025 data, and while other large AI labs such as Google publish statistics about the NCMEC reports they've made, they don't specify what percentage of those reports are AI-related.
OpenAI's update comes at the end of a year in which the company and its competitors have faced increased scrutiny over child safety issues beyond CSAM. Over the summer, 44 state attorneys general sent a joint letter to multiple AI companies, including OpenAI, Meta, Character.AI, and Google, warning that they would "use every facet of our authority to protect children from exploitation by predatory artificial intelligence products." Both OpenAI and Character.AI have faced multiple lawsuits from families or on behalf of individuals who allege that the chatbots contributed to their children's deaths. In the fall, the US Senate Committee on the Judiciary held a hearing on the harms of AI chatbots, and the US Federal Trade Commission launched a market study on AI companion bots that included questions about how companies are mitigating negative impacts, particularly to children. (I was previously employed by the FTC and was assigned to work on the market study prior to leaving the agency.)

In recent months, OpenAI has rolled out new safety-focused tools more broadly. In September, it introduced several new features for ChatGPT, including parental controls, as part of its work "to give families tools to support their teens' use of AI." Parents and their teens can link their accounts, and parents can change their teen's settings, including turning off voice mode and memory, removing the ability for ChatGPT to generate images, and opting their teen out of model training. OpenAI said it could also notify parents if their teen's conversations showed signs of self-harm, and potentially notify law enforcement if it detected an imminent threat to life and wasn't able to reach a parent.

In late October, to cap off negotiations with the California Department of Justice over its proposed recapitalization plan, OpenAI agreed to "continue to undertake measures to mitigate risks to teens and others in connection with the development and deployment of AI and of AGI." The following month, OpenAI released its Teen Safety Blueprint, in which it said it was constantly improving its ability to detect child sexual abuse and exploitation material and reporting confirmed CSAM to relevant authorities, including NCMEC.



Source

This article was originally published by Maddy Varner. Read the original at wired.com

Emergent News aggregates and curates content from trusted sources to help you understand reality clearly.

Related Articles

AI-Powered Dating Is All Hype. IRL Cruising Is the Future
ai-pulse 6 min

Dating apps and AI companies have been touting bot wingmen for months. But the future might just be good old-fashioned meet-cutes.

Dec 31, 2025 Read →
The Great Big Power Play
ai-pulse 7 min

By Molly Taft · Dec 30, 2025

US support for nuclear energy is soaring. Meanwhile, coal plants are on their way out and electricity-sucking data centers are meeting huge pushback. Welcome to the next front in the energy battle.

Take yourself back to 2017. Get Out and The Shape of Water were playing in theaters, Zohran Mamdani was still known as rapper Young Cardamom, and the Trump administration, freshly in power, was eager to prop up its favored energy sources. That year, the administration introduced a series of subsidies for struggling coal-fired power plants and nuclear power plants, which were facing increasing price pressure from gas and cheap renewables. The plan would have put taxpayers on the hook for billions of dollars. It didn't work.

In subsequent years, the nuclear industry kept running into roadblocks. Three nuclear plants have shut down since 2020, while construction of two of the only four reactors started since 2000 was put on hold, after a decade and billions of dollars, following a political scandal. Coal, meanwhile, continued its long decline: it now makes up just 17 percent of the US power mix, down from a high of 45 percent in 2010. Now, both of these energy sources are getting second chances. The difference this time is the buzz around AI, but it isn't clear that the outcome will be much different.

Expired: canceling nuclear
Tired: canceling coal
Wired: the free market

Throughout 2025, the Trump administration has not just gone all in on promoting nuclear but positioned it specifically as a solution to AI's energy needs. In May, the president signed a series of executive orders intended to boost nuclear energy in the US, including ordering 10 new large reactors to be constructed by 2030. A pilot program at the Department of Energy created as a result of May's executive orders, coupled with a serious reshuffling of the country's nuclear regulator, has already led to breakthroughs from smaller startups. Energy secretary Chris Wright said in September that AI's progress "will be accelerated by rapidly unlocking and deploying commercial nuclear power."

The administration's push is mirrored by investments from tech companies. Giants like Google, Amazon, and Microsoft have inked numerous deals in recent years with nuclear companies to power data centers; Microsoft even joined the World Nuclear Association. Multiple retired reactors in the US are being considered for restarts, including two of the three that have closed in the past five years, with the tech industry supporting some of these arrangements. (This includes Microsoft's high-profile restart of the infamous Three Mile Island, which is also being backed by a $1 billion loan from the federal government.) It's a good time for both the private and public sectors to push nuclear: public support for nuclear power is the highest it's been since 2010.

Despite all of this, the practicalities of nuclear energy leave its future in doubt. Most of nuclear's costs come not from onerous regulations but from construction. Critics are wary of juiced-up valuations for small modular reactor companies, especially those with deep connections to the Trump administration. An $80 billion deal the government struck with reactor giant Westinghouse in October is light on details, leaving more questions than answers for the industry. And despite high-profile tech deals that promise to get reactors up and running in a few years, the timelines remain tricky.

Still, insiders say this year marked a turning point. "Nuclear technology has been seen by proponents as the neglected and unjustly villainized hero of the energy world," says Brett Rampal, a nuclear power expert who advises investors. "Now, full-throated support from the president, Congress, tech companies, and the common person feels like generational restitution and a return to meritocracy."

Nuclear isn't the only form of energy that seems to be getting a second start thanks to AI. In April, President Trump signed a series of executive orders to boost US coal to power AI; Wright has since ordered two plants that were slated to be retired to stay online via emergency order. The administration has also scrambled to make it easier to run coal plants, in particular by doing away with pollution regulation. These efforts, and the endless demand for energy from AI, may have extended a lifeline to coal: more than two dozen generating units that were scheduled to retire across the country are now staying online, separate from Wright's order, with some getting yearslong reprieves.

A complete recovery for the industry, however, is still an open question. A recent analysis of the US power sector finds that almost all of the 10 largest utilities in the US are significantly slashing their reliance on coal. (Many of these utilities, the analysis shows, have been looking to replace coal-fired power with more nuclear.) Part of what may keep coal on its downward track in the US, albeit with an extended lifeline, is simply its bad PR. The tech of the future, after all, isn't supposed to pollute the air and drive temperatures up; while AI has significantly set Big Tech back from its climate goals, these companies are theoretically still committed to not frying the planet. And while tech giants are scrambling to align themselves with nuclear, which does not produce direct carbon emissions, no big companies have openly partnered with a struggling coal plant or splashed out a press release about how they're seeking to produce more energy from coal. (Some retired coal plants are being proposed as sites for data centers, powered by natural gas.) Some companies are trying to develop technologies that would capture carbon emissions from coal plants, but the outlook for those technologies is bearish following some high-profile failures. "Emissions [are] always going to factor into the discussion" for investors, says Rampal.

The Oval Office playing favorites with energy sources doesn't mean it can defeat the market. Utility-scale solar and onshore wind remain some of the cheapest forms of energy around, even without government subsidies. And while Washington looks backward, other countries are continuing massive buildouts of renewable energy. China's emissions have taken a nosedive over the past 18 months, thanks in large part to a huge expansion of renewable energy; coal's use in its power sector is declining due to competition from renewables, while nuclear made up only a small slice of total power use. If the administration's goal is to defeat China on AI, it might want to start by taking a look at its energy playbook.

Dec 30, 2025 Read →
3 New Tricks to Try With Google Gemini Live After Its Latest Major Upgrade
ai-pulse 5 min

By David Nield · Dec 29, 2025

Google's AI is now even smarter, and more versatile.

Gemini Live is the more conversational, natural-language way of interacting with the Google Gemini AI bot using your voice. The idea is that you chat with it like you would chat with a friend, interruptions and all, even if the actual answers are the same as you'd get from typing your queries into Gemini as normal. Now, about a year and a half after its debut, Gemini Live has been given what Google is describing as its "biggest update ever."

The update makes Gemini Live even more natural and conversational than before, with a better understanding of tone, nuance, pronunciation, and rhythm. There's no real visible indication that anything has changed, and often a lot of the responses will seem the same as before, too. However, there are certain areas where you can tell the difference the latest upgrade has made, so here's how to make the most of the new and improved Gemini Live. The update is rolling out now for Gemini on Android and iOS. To access Gemini Live, launch the Gemini app, then tap the Live button in the lower right corner (it looks vaguely like a sound wave) and start talking.

Hear Some Stories

Gemini Live can now add more feeling and variation to its storytelling, which can be useful for history lessons, bedtimes for the children, and creative brainstorming. The AI will even add in different accents and tones where appropriate, to help you distinguish between characters and scenes. One of Google's own examples of how this works best is to get Gemini Live to tell you the story of the Roman Empire from the perspective of Julius Caesar. It's a challenge that requires some leaps in perspective and imagination, and it asks Gemini Live to use tone and style in a way it should now be better at. You don't have to restrict yourself to Julius Caesar or the Roman Empire, either. You could get Gemini Live to give you a retelling of Pride and Prejudice from the perspective of each Bennet sister, for example, or have the AI spin up a tale of what life would have been like in your part of the world 100, 200, or 300 years ago.

Learn Some Skills

Another area where Gemini Live's new capabilities make a noticeable difference is in educating and explaining: you can get it to give you a crash course (or a longer tutorial) on any topic of your choosing, anything from the intricacies of human genetics to the best ways to clean a carpet. You can even get Gemini Live to teach you a language. The AI can now go at a pace to suit you, which is particularly useful when you're trying to learn something new. If you need Gemini Live to slow down, speed up, or repeat something, just say so. If you've only got a certain amount of time to spare, let Gemini know when you're chatting to it. As usual, be wary of AI hallucinations, and don't assume that everything you hear is fully accurate or verified. If you want to learn something like how to rewire the lighting in your home or fix a problematic car engine, double-check the guidance you're getting against other sources, but Gemini Live is at least a useful starting point.

Test Some Accents

One of the new skills Gemini Live gains with this latest update is the ability to speak in different accents. Perhaps you want the history of the Wild West spoken by a cowboy, or you need the intricacies of the British Royal Family explained by someone with an authentic London accent. Gemini Live can now handle these requests. This extends to the language learning mentioned above, because you can hear words and phrases spoken as they would be by native speakers, and then try to copy the pronunciation and phrasing. While Gemini Live doesn't cover every language and accent across the globe, it can access plenty of them. There are certain safeguards built in here, and your requests might get refused if you veer too close to derogatory uses of accents and speech, or if you're trying to impersonate real people. However, it's another fun way to test out the AI, and to get responses that are more varied and personalized.

Dec 29, 2025 Read →