AI Risks Escalate as Hackers Crack 600 Firewalls, Threaten GRC Strategies

Companies must reassess compliance and risk management as AI-powered threats intensify

AI-Synthesized from 2 sources

By Emergent News Desk

Wednesday, February 25, 2026

The rapid evolution of Artificial Intelligence (AI) is transforming the cybersecurity landscape, forcing companies to reassess their Governance, Risk & Compliance (GRC) strategies. A recent hacking incident, in which a Russian-speaking hacker compromised over 600 FortiGate firewalls using a Google Gemini-based AI tool, has underscored the urgency of this challenge.

According to a report by the security provider Check Point, one in every 80 requests sent from corporate devices to Generative AI (GenAI) services carries a high risk of sensitive data loss. Chief Information Security Officers (CISOs) face the daunting task of keeping pace with innovation demands while securing AI adoption and mitigating the associated risks.

The hacking incident, reported by Amazon Web Services (AWS) security researchers, began with the exploitation of weak passwords to gain initial access to the firewalls. The attacker then used an AI-powered reconnaissance tool, written in Go and Python, to compromise additional devices within the same network.

CJ Moses, CISO at Amazon Integrated Security, noted that the analysis of the tool's source code revealed clear evidence of AI-assisted development. "Redundant comments that merely repeated function names, a simple architecture with obvious naming conventions, and an overall lack of sophistication in the code suggest that the tool was developed with the aid of AI," Moses explained.
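The stylistic markers Moses describes can be illustrated with a benign, hypothetical snippet (none of this code comes from the actual tool): comments that merely restate the function name, overly literal naming, and a flat structure with no abstraction.

```python
# Hypothetical illustration of AI-assisted code traits: redundant
# comments that repeat the function name and obvious, literal naming.

def parse_host_list(text):
    # parse_host_list: parses the host list
    hosts = []
    for line in text.splitlines():
        line = line.strip()
        if line:
            hosts.append(line)
    return hosts

def count_host_list(host_list):
    # count_host_list: counts the host list
    return len(host_list)
```

No single trait is conclusive on its own; analysts typically weigh such patterns together when judging whether code was machine-generated.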

The incident highlights the growing threat of AI-powered attacks, which can evade traditional security measures and exploit vulnerabilities in unexpected ways. As companies increasingly integrate AI into their operations, they must also reassess their GRC strategies to address the unique risks associated with AI adoption.

Governance, Risk & Compliance emerged as a discipline in the early 2000s, focused primarily on regulatory compliance and risk management. The integration of AI into GRC processes, however, requires a more nuanced approach, one that balances innovation with risk management.

To effectively mitigate AI-powered threats, companies must develop a comprehensive GRC strategy that incorporates AI-specific risk assessments, compliance frameworks, and security measures. This includes implementing robust password policies, regularly updating software and systems, and investing in AI-powered security tools that can detect and respond to emerging threats.
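Since weak passwords provided the initial foothold in the incident above, a password policy check is one concrete starting point. The following is a minimal sketch assuming an illustrative policy (minimum length of 12, mixed case, a digit, and a symbol); actual requirements should follow organizational and regulatory standards.

```python
import re

def meets_policy(password: str) -> bool:
    """Check a password against an illustrative baseline policy."""
    checks = [
        len(password) >= 12,              # minimum length
        re.search(r"[a-z]", password),    # lowercase letter
        re.search(r"[A-Z]", password),    # uppercase letter
        re.search(r"\d", password),       # digit
        re.search(r"[^A-Za-z0-9]", password),  # symbol
    ]
    return all(bool(c) for c in checks)
```

For example, `meets_policy("admin")` fails on every check, while a long mixed-character passphrase passes; in practice such checks are paired with breached-password screening and multi-factor authentication.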

The escalating risks associated with AI adoption demand a proactive and adaptive approach to GRC. By acknowledging the unique challenges posed by AI and developing strategies to address them, companies can integrate AI securely into their operations and maintain a competitive edge in an increasingly complex cybersecurity landscape.

AI-Synthesized Content

This article was synthesized by Fulqrum AI from two trusted sources, combining multiple perspectives into a comprehensive summary.

Emergent News aggregates and curates content from trusted sources to help you understand reality clearly.

Powered by Fulqrum, an AI-powered autonomous news platform.