🧠 AI Pulse 7 min read

New York’s landmark AI safety bill was defanged — and universities were part of the push against it

By Hayden Field

Tuesday, December 23, 2025




A group including Big Tech players and major universities fought against the RAISE Act, which got a last-minute rewrite.

Hayden Field is The Verge’s senior AI reporter. She has covered the AI beat for more than five years, and her work has also appeared in CNBC, MIT Technology Review, Wired UK, and other outlets.

A group of tech companies and academic institutions spent tens of thousands of dollars in the past month — likely between $17,000 and $25,000 — on an ad campaign against New York’s landmark AI safety bill, a campaign that may have reached more than two million people, according to Meta’s Ad Library.

The landmark bill is called the RAISE Act, or the Responsible AI Safety and Education Act, and days ago, a version of it was signed by New York Governor Kathy Hochul. The closely watched law dictates that AI companies developing large models — OpenAI, Anthropic, Meta, Google, DeepSeek, etc. — must outline safety plans and transparency rules for reporting large-scale safety incidents to the attorney general. But the version Hochul signed — different from the one passed by both the New York State Senate and the Assembly in June — was a rewrite that made it much more favorable to tech companies. A group of more than 150 parents had sent the governor a letter urging her to sign the bill without changes. And the group of tech companies and academic institutions, called the AI Alliance, was part of the charge to defang it.

The AI Alliance — the organization behind the opposition ad campaign — counts Meta, IBM, Intel, Oracle, Snowflake, Uber, AMD, Databricks, and Hugging Face among its members, which is not necessarily surprising. The group sent a letter to New York lawmakers in June expressing its “deep concern” about the bill, which it deemed “unworkable.” But the group isn’t just made up of tech companies. Its members include a number of colleges and universities around the world, including New York University, Cornell University, Dartmouth College, Carnegie Mellon University, Northeastern University, Louisiana State University, and the University of Notre Dame, as well as Penn Engineering and Yale Engineering.

The ads began on November 23 and ran with the title, “The RAISE Act will stifle job growth.” They said that the legislation “would slow down the New York technology ecosystem powering 400,000 high-tech jobs and major investments. Rather than stifling innovation, let’s champion a future where AI development is open, trustworthy, and strengthens the Empire State.”

When The Verge asked the academic institutions listed above whether they were aware they had inadvertently become part of an ad campaign against widely discussed AI safety legislation, none responded to a request for comment besides Northeastern, which did not provide a comment by publication time.

In recent years, OpenAI and its competitors have increasingly been courting academic institutions to join research consortiums and offering technology directly to students for free. Many of the academic institutions that are part of the AI Alliance aren’t directly involved in one-on-one partnerships with AI companies, but some are. For instance, Northeastern’s partnership with Anthropic this year translated to Claude access for 50,000 students, faculty, and staff across 13 global campuses, per Anthropic’s announcement in April. In 2023, OpenAI funded a journalism ethics initiative at NYU. Dartmouth announced a partnership with Anthropic earlier this month, a Carnegie Mellon University professor currently serves on OpenAI’s board, and Anthropic has funded programs at Carnegie Mellon.

The initial version of the RAISE Act stated that developers must not release a frontier model “if doing so would create an unreasonable risk of critical harm,” which the bill defines as the death or serious injury of 100 people or more, or $1 billion or more in damages to rights in money or property, stemming from the creation of a chemical, biological, radiological, or nuclear weapon. That definition also extends to an AI model that “acts with no meaningful human intervention” and “would, if committed by a human,” fall under certain crimes. The version Hochul signed removed this clause. Hochul also extended the deadline for disclosing safety incidents and reduced fines, among other changes.

The AI Alliance has lobbied previously against AI safety policies, including the RAISE Act, California’s SB 1047, and President Biden’s AI executive order. It states that its mission is to “bring together builders and experts from various fields to collaboratively and transparently address the challenges of generative AI and democratize its benefits,” especially via “member-driven working groups.” Some of the group’s projects beyond lobbying have involved cataloguing and managing “trustworthy” datasets and creating a ranked list of AI safety priorities.

The AI Alliance wasn’t the only organization opposing the RAISE Act with ad dollars. As The Verge wrote recently, Leading the Future, a pro-AI super PAC backed by Perplexity AI, Andreessen Horowitz (a16z), Palantir cofounder Joe Lonsdale, and OpenAI president Greg Brockman, has spent money on ads targeting the cosponsor of the RAISE Act, New York State Assemblymember Alex Bores. But Leading the Future is a super PAC with a clear agenda, whereas the AI Alliance is a nonprofit that’s partnered with a trade association — with the mission of “developing AI collaboratively, transparently, and with a focus on safety, ethics, and the greater good.”



Source

This article was originally published by Hayden Field. Read the original at theverge.com

Emergent News aggregates and curates content from trusted sources to help you understand reality clearly.

Related Articles

ai-pulse 6 min

AI-Powered Dating Is All Hype. IRL Cruising Is the Future

Dating apps and AI companies have been touting bot wingmen for months. But the future might just be good old-fashioned meet-cutes.

Dec 31, 2025 Read →
ai-pulse 7 min

The Great Big Power Play

By Molly Taft

US support for nuclear energy is soaring. Meanwhile, coal plants are on their way out and electricity-sucking data centers are meeting huge pushback. Welcome to the next front in the energy battle.

Take yourself back to 2017. Get Out and The Shape of Water were playing in theaters, Zohran Mamdani was still known as rapper Young Cardamom, and the Trump administration, freshly in power, was eager to prop up its favored energy sources. That year, the administration introduced a series of subsidies for struggling coal-fired power plants and nuclear power plants, which were facing increasing price pressure from gas and cheap renewables. The plan would have put taxpayers on the hook for billions of dollars. It didn’t work.

In subsequent years, the nuclear industry kept running into roadblocks. Three nuclear plants have shut down since 2020, while construction of two of the only four reactors started since 2000 was put on hold after a decade and billions of dollars, following a political scandal. Coal, meanwhile, continued its long decline: It comprises just 17 percent of the US power mix, down from a high of 45 percent in 2010.

Now, both of these energy sources are getting second chances. The difference this time is the buzz around AI, but it isn’t clear that the outcome will be much different.

Expired: canceling nuclear
Tired: canceling coal
Wired: the free market

Throughout 2025, the Trump administration has not just gone all in on promoting nuclear but positioned it specifically as a solution to AI’s energy needs. In May, the president signed a series of executive orders intended to boost nuclear energy in the US, including ordering 10 new large reactors to be constructed by 2030. A pilot program at the Department of Energy created as a result of May’s executive orders — coupled with a serious reshuffling of the country’s nuclear regulator — has already led to breakthroughs from smaller startups. Energy secretary Chris Wright said in September that AI’s progress “will be accelerated by rapidly unlocking and deploying commercial nuclear power.”

The administration’s push is mirrored by investments from tech companies. Giants like Google, Amazon, and Microsoft have inked numerous deals in recent years with nuclear companies to power data centers; Microsoft even joined the World Nuclear Association. Multiple retired reactors in the US are being considered for restarts — including two of the three that have closed in the past five years — with the tech industry supporting some of these arrangements. (This includes Microsoft’s high-profile restart of the infamous Three Mile Island, which is also being backed by a $1 billion loan from the federal government.) It’s a good time for both the private and public sectors to push nuclear: public support for nuclear power is the highest it’s been since 2010.

Despite all of this, the practicalities of nuclear energy leave its future in doubt. Most of nuclear’s costs come not from onerous regulations but from construction. Critics are wary of juiced-up valuations for small modular reactor companies, especially those with deep connections to the Trump administration. An $80 billion deal the government struck with reactor giant Westinghouse in October is light on details, leaving more questions than answers for the industry. And despite high-profile tech deals that promise to get reactors up and running in a few years, the timelines remain tricky.

Still, insiders say that this year marked a turning point. “Nuclear technology has been seen by proponents as the neglected and unjustly villainized hero of the energy world,” says Brett Rampal, a nuclear power expert who advises investors. “Now, full-throated support from the president, Congress, tech companies, and the common person feels like generational restitution and a return to meritocracy.”

Nuclear isn’t the only form of energy that seems to be getting a second start thanks to AI. In April, President Trump signed a series of executive orders to boost US coal to power AI; Wright has since ordered two plants that were slated to be retired to stay online via emergency order. The administration has also scrambled to make it easier to run coal plants, in particular focusing on doing away with pollution regulation. These efforts — and the endless demand for energy from AI — may have extended a lifeline to coal: More than two dozen generating units that were scheduled to retire across the country are now staying online, separate from Wright’s order, with some getting yearslong reprieves.

A complete recovery for the industry, however, is still an open question. A recent analysis of the US power sector finds that almost all of the 10 largest utilities in the US are significantly slashing their reliance on coal. (Many of these utilities, the analysis shows, have been looking to replace coal-fired power with more nuclear.)

Part of what may keep coal on its downward track in the US — albeit with an extended lifeline — is simply its bad PR. The tech of the future, after all, isn’t supposed to pollute the air and drive temperatures up; while AI has significantly set Big Tech back from its climate-change goals, these companies are theoretically still committed to not frying the planet. And while tech giants are scrambling to align themselves with nuclear, which does not produce direct carbon emissions, no big companies have openly partnered with a struggling coal plant or splashed out a press release about how they’re seeking to produce more energy from coal. (Some retired coal plants are being proposed as sites for data centers, powered by natural gas.) Some companies are trying to develop technologies that would capture carbon emissions from coal plants, but the outlook for those technologies is bearish following some high-profile failures. “Emissions [are] always going to factor into the discussion” for investors, says Rampal.

The Oval Office playing favorites with energy sources doesn’t mean that it can defeat the market. Utility-scale solar and onshore wind remain some of the cheapest forms of energy around, even without government subsidies. And while Washington looks backward, other countries are continuing massive buildouts of renewable energy. China’s emissions have taken a nosedive over the past 18 months, thanks in large part to a huge expansion of renewable energy. Coal’s use in China’s power sector is declining due to competition from renewables, while nuclear makes up only a small slice of its total power use. If the administration’s goal is to defeat China on AI, it might want to start by taking a look at its energy playbook.

Dec 30, 2025 Read →
ai-pulse 5 min

3 New Tricks to Try With Google Gemini Live After Its Latest Major Upgrade

By David Nield

Google’s AI is now even smarter and more versatile.

Gemini Live is the more conversational, natural-language way of interacting with the Google Gemini AI bot using your voice. The idea is that you chat with it like you would chat with a friend, interruptions and all, even if the actual answers are the same as you’d get from typing your queries into Gemini as normal.

Now, about a year and a half after its debut, Gemini Live has been given what Google is describing as its “biggest update ever.” The update makes the Gemini Live mode even more natural and even more conversational than before, with a better understanding of tone, nuance, pronunciation, and rhythm. There’s no real visible indication that anything has changed, and often a lot of the responses will seem the same as before too. However, there are certain areas where you can tell the difference the latest upgrade has made, so here’s how to make the most of the new and improved Gemini Live.

The update is rolling out now for Gemini on Android and iOS. To access Gemini Live, launch the Gemini app, then tap the Live button in the lower right-hand corner (it looks vaguely like a sound wave) and start talking.

Hear Some Stories

Gemini Live can now add more feeling and variation to its storytelling capabilities, which can be useful for history lessons, bedtimes for the children, and creative brainstorming. The AI will even add in different accents and tones where appropriate, to help you distinguish between the characters and scenes.

One of Google’s own examples of how this works best is to get Gemini Live to tell you the story of the Roman Empire from the perspective of Julius Caesar. It’s a challenge that requires some leaps in perspective and imagination, and calls for tone and style to be used appropriately, in a way that Gemini Live should now be better at.

You don’t have to restrict yourself to Julius Caesar or the Roman Empire either. You could get Gemini Live to give you a retelling of Pride and Prejudice from the perspective of each of the Bennet sisters, for example, or have the AI spin up a tale of what life would have been like in your part of the world 100, 200, or 300 years ago.

Learn Some Skills

Another area where Gemini Live’s new capabilities make a noticeable difference is in educating and explaining: You can get it to give you a crash course (or a longer tutorial) on any topic of your choosing, anything from the intricacies of human genetics to the best ways to clean a carpet. You can even get Gemini Live to teach you a language.

The AI can now go at a pace to suit you, which is particularly useful when you’re trying to learn something new. If you need Gemini Live to slow down, speed up, or repeat something, just say so. If you’ve only got a certain amount of time to spare, let Gemini know when you’re chatting to it.

As usual, be wary of AI hallucinations, and don’t assume that everything you hear is fully accurate or verified. If you want to learn something like how to rewire the lighting in your home or fix a problematic car engine, double-check the guidance you’re getting with other sources, but Gemini Live is at least a useful starting point.

Test Some Accents

One of the new skills Gemini Live has with this latest update is the ability to speak in different accents. Perhaps you want the history of the Wild West spoken by a cowboy, or you need the intricacies of the British Royal Family explained by someone with an authentic London accent. Gemini Live can now handle these requests.

This extends to the language learning mentioned above, because you can hear words and phrases spoken as they would be by native speakers, and then try to copy the pronunciation and phrasing. While Gemini Live doesn’t cover every language and accent across the globe, it can access plenty of them.

There are certain safeguards built in here, and your requests might get refused if you veer too close to derogatory uses of accents and speech, or if you’re trying to impersonate real people. However, it’s another fun way to test out the AI, and to get responses that are more varied and personalized.

Dec 29, 2025 Read →