Google Cloud reveals how AI is reshaping cybersecurity defence

News Room

In Google’s sleek Singapore office at Block 80, Level 3, Mark Johnston stood before a room of technology journalists at 1:30 PM with a startling admission: after five decades of cybersecurity evolution, defenders are still losing the war. “In 69% of incidents in Japan and Asia Pacific, organisations were notified of their own breaches by external entities,” the Director of Google Cloud’s Office of the CISO for Asia Pacific revealed, his presentation slide showing a damning statistic – most companies can’t even detect when they’ve been breached.

What unfolded during the hour-long “Cybersecurity in the AI Era” roundtable was an honest assessment of how Google Cloud’s AI technologies are attempting to reverse decades of defensive failures, even as the same artificial intelligence tools empower attackers with unprecedented capabilities.

Mark Johnston presenting Mandiant’s M-Trends data showing detection failures across Asia Pacific

The historical context: 50 years of defensive failure

The crisis isn’t new. Johnston traced the problem back to cybersecurity pioneer James P. Anderson’s 1972 observation that “systems that we use really don’t protect themselves” – a challenge that has persisted despite decades of technological advancement. “What James P. Anderson said back in 1972 still applies today,” Johnston said, highlighting how fundamental security problems remain unsolved even as technology evolves.

The persistence of basic vulnerabilities compounds this challenge. Google Cloud’s threat intelligence data reveals that “over 76% of breaches start with the basics” – configuration errors and credential compromises that have plagued organisations for decades. Johnston cited a recent example: “Last month, a very common product that most organisations have used at some point in time, Microsoft SharePoint, also has what we call a zero-day vulnerability…and during that time, it was attacked continuously and abused.”

The AI arms race: Defenders vs. attackers

Google Cloud’s visualization of the “Defender’s Dilemma” showing the scale imbalance between attackers and defenders

Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University, describes the current landscape as “a high-stakes arms race” where both cybersecurity teams and threat actors employ AI tools to outmanoeuvre each other. “For defenders, AI is a valuable asset,” Curran explains in a media note. “Enterprises have implemented generative AI and other automation tools to analyse vast amounts of data in real time and identify anomalies.”

However, the same technologies benefit attackers. “For threat actors, AI can streamline phishing attacks, automate malware creation and help scan networks for vulnerabilities,” Curran warns. The dual-use nature of AI creates what Johnston calls “the Defender’s Dilemma.”

Google Cloud AI initiatives aim to tilt these scales in favour of defenders. Johnston argued that “AI affords the best opportunity to upend the Defender’s Dilemma, and tilt the scales of cyberspace to give defenders a decisive advantage over attackers.” The company’s approach centres on what they term “countless use cases for generative AI in defence,” spanning vulnerability discovery, threat intelligence, secure code generation, and incident response.

Project Zero’s Big Sleep: AI finding what humans miss

One of Google’s most compelling examples of AI-powered defence is Project Zero’s “Big Sleep” initiative, which uses large language models to identify vulnerabilities in real-world code. Johnston pointed to a milestone result: “Big Sleep found a vulnerability in an open source library using Generative AI tools – the first time we believe that a vulnerability was found by an AI service.”

The program’s evolution demonstrates AI’s growing capabilities. “Last month, we announced we found over 20 vulnerabilities in different packages,” Johnston noted. “But today, when I looked at the Big Sleep dashboard, I found 47 vulnerabilities in August that have been found by this solution.”
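Google has not published Big Sleep’s pipeline in detail, but the general shape of LLM-assisted vulnerability triage can be sketched. The Python snippet below is a minimal illustration of that pattern only; the prompt, the `complete` helper, and the JSON schema are assumptions for the sketch, not Big Sleep’s actual design.

```python
import json

# Minimal sketch of LLM-assisted vulnerability triage. `complete` is a
# hypothetical callable wrapping whatever LLM API is in use; Big Sleep's
# real pipeline is far more involved, and every candidate finding still
# goes to a human for confirmation.

PROMPT_TEMPLATE = """You are a security auditor. Review the C function below
for memory-safety bugs (buffer overflow, use-after-free, integer overflow).
Reply with JSON: {{"vulnerable": bool, "reason": str, "line": int}}.

{code}
"""

def triage_snippet(code: str, complete) -> dict:
    """Ask the model to flag candidate vulnerabilities in one code snippet."""
    raw = complete(PROMPT_TEMPLATE.format(code=code))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Model output is untrusted; an unparseable reply is discarded
        # rather than treated as a finding.
        return {"vulnerable": False, "reason": "unparseable model output"}
```

Any “vulnerable” verdict from such a loop is a lead for a human analyst, not a confirmed finding, which is exactly the division of labour Johnston describes next.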

The progression from manual human analysis to AI-assisted discovery represents what Johnston describes as a shift “from manual to semi-autonomous” security operations, where “Gemini drives most tasks in the security lifecycle consistently well, delegating tasks it can’t automate with sufficiently high confidence or precision.”

The automation paradox: Promise and peril

Google Cloud’s roadmap envisions progression through four stages: Manual, Assisted, Semi-autonomous, and Autonomous security operations. In the semi-autonomous phase, AI systems would handle routine tasks while escalating complex decisions to human operators. The ultimate autonomous phase would see AI “drive the security lifecycle to positive outcomes on behalf of users.”

Google Cloud’s roadmap for evolving from manual to autonomous AI security operations
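In practice, the semi-autonomous stage amounts to a confidence-gated hand-off: the system acts on findings it is sure about and routes everything else to people. The sketch below illustrates that pattern; the alert fields, the 0.9 threshold, and the remediation actions are illustrative assumptions, not details from Google Cloud’s roadmap.

```python
from dataclasses import dataclass

# Illustrative threshold; real systems tune this per alert type.
CONFIDENCE_THRESHOLD = 0.9

@dataclass
class Alert:
    source: str        # e.g. "ids", "email-gateway"
    description: str
    confidence: float  # model's confidence that this is a true positive

def handle(alert: Alert) -> str:
    """Semi-autonomous triage: act on confident findings, escalate the rest."""
    if alert.confidence >= CONFIDENCE_THRESHOLD:
        # Routine, high-confidence case: remediate automatically
        # (quarantine a host, revoke a token, block a sender).
        return f"auto-remediated: {alert.description}"
    # Ambiguous case: a human analyst makes the call.
    return f"escalated to analyst: {alert.description}"

print(handle(Alert("email-gateway", "credential-phishing lure", 0.97)))
print(handle(Alert("ids", "unusual east-west traffic", 0.55)))
```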

However, this automation introduces new vulnerabilities. When asked about the risks of over-reliance on AI systems, Johnston acknowledged the challenge: “There is the potential that this service could be attacked and manipulated. At the moment, when you see tools that these agents are piped into, there isn’t a really good framework to authorise that that’s the actual tool that hasn’t been tampered with.”

Curran echoes this concern: “The risk to companies is that their security teams will become over-reliant on AI, potentially sidelining human judgment and leaving systems vulnerable to attacks. There is still a need for a human ‘copilot’ and roles need to be clearly defined.”

Real-world implementation: Controlling AI’s unpredictable nature

Google Cloud’s approach includes practical safeguards to address one of AI’s most problematic characteristics: its tendency to generate irrelevant or inappropriate responses. Johnston illustrated this challenge with a concrete example of contextual mismatches that could create business risks.

“If you’ve got a retail store, you shouldn’t be having medical advice instead,” Johnston explained, describing how AI systems can unexpectedly shift into unrelated domains. “Sometimes these tools can do that.” The unpredictability represents a significant liability for businesses deploying customer-facing AI systems, where off-topic responses could confuse customers, damage brand reputation, or even create legal exposure.

Google’s Model Armor technology addresses this by functioning as an intelligent filter layer. “Having filters and using our capabilities to put health checks on those responses allows an organisation to get confidence,” Johnston noted. The system screens AI outputs for personally identifiable information, filters content inappropriate to the business context, and blocks responses that could be “off-brand” for the organisation’s intended use case.
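Model Armor’s actual interface was not shown at the briefing, so the snippet below only sketches the general pattern of a post-generation filter layer, with hand-rolled checks standing in for Google’s managed ones. The regex and blocked topics are assumptions for a hypothetical retail deployment like the one in Johnston’s example.

```python
import re

# Crude PII detector, a stand-in for managed sensitive-data checks.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

# Topics out of scope for a hypothetical retail deployment.
OFF_TOPIC_TERMS = ("diagnosis", "dosage", "prescription")

def screen_response(text: str) -> tuple[bool, str]:
    """Return (allowed, reason). Runs after generation, before the user sees it."""
    if EMAIL_RE.search(text):
        return False, "blocked: response contains PII"
    lowered = text.lower()
    if any(term in lowered for term in OFF_TOPIC_TERMS):
        return False, "blocked: off-topic for this business context"
    return True, "allowed"

print(screen_response("Your order ships on Tuesday."))
print(screen_response("For that symptom, a typical dosage is..."))
```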

The company also addresses the growing concern about shadow AI deployment. Organisations are discovering hundreds of unauthorised AI tools in their networks, creating massive security gaps. Google’s sensitive data protection technologies attempt to address this by scanning across multiple cloud providers and on-premises systems.
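The discovery tooling itself wasn’t detailed. As a rough illustration of the shadow AI problem, a defender might sweep egress logs for traffic to known AI API endpoints; the log format and endpoint list below are assumptions for the sketch, not Google’s implementation.

```python
# Rough sketch of shadow-AI discovery from egress logs. The endpoint list
# and log format are illustrative assumptions, not Google's actual
# sensitive-data-protection implementation.

KNOWN_AI_ENDPOINTS = {
    "api.openai.com",
    "generativelanguage.googleapis.com",
    "api.anthropic.com",
}

def find_shadow_ai(egress_log_lines: list[str]) -> set[str]:
    """Return source hosts seen talking to known AI APIs."""
    hits = set()
    for line in egress_log_lines:
        # Assumed format: "<src_host> <dst_domain> <bytes>"
        parts = line.split()
        if len(parts) >= 2 and parts[1] in KNOWN_AI_ENDPOINTS:
            hits.add(parts[0])
    return hits

log = [
    "build-server-3 api.openai.com 5120",
    "hr-laptop-12 intranet.example.com 2048",
]
print(find_shadow_ai(log))  # {'build-server-3'}
```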

The scale challenge: Budget constraints vs. growing threats

Johnston identified budget constraints as the primary challenge facing Asia Pacific CISOs, a squeeze that comes precisely as organisations face escalating cyber threats. The paradox is stark: as attack volumes increase, organisations lack the resources to respond adequately.

“We look at the statistics and objectively say, we’re seeing more noise – may not be super sophisticated, but more noise is more overhead, and that costs more to deal with,” Johnston observed. The increase in attack frequency, even when individual attacks aren’t necessarily more advanced, creates a resource drain that many organisations cannot sustain.

The financial pressure intensifies an already complex security landscape. “They are looking for partners who can help accelerate that without having to hire 10 more staff or get larger budgets,” Johnston explained, describing how security leaders face mounting pressure to do more with existing resources while threats multiply.

Critical questions remain

Despite Google Cloud AI’s promising capabilities, several important questions persist. When challenged about whether defenders are actually winning this arms race, Johnston acknowledged: “We haven’t seen novel attacks using AI to date,” but noted that attackers are using AI to scale existing attack methods and create “a wide range of opportunities in some aspects of the attack.”

The effectiveness claims also require scrutiny. While Johnston cited a 50% improvement in incident report writing speed, he admitted that accuracy remains a challenge: “There are inaccuracies, sure. But humans make mistakes too.” The acknowledgement highlights the ongoing limitations of current AI security implementations.

Looking forward: Post-quantum preparations

Beyond current AI implementations, Google Cloud is already preparing for the next paradigm shift. Johnston revealed that the company has “already deployed post-quantum cryptography between our data centres by default at scale,” positioning for future quantum computing threats that could render current encryption obsolete.

The verdict: Cautious optimism required

The integration of AI into cybersecurity represents both unprecedented opportunity and significant risk. While Google Cloud’s AI technologies demonstrate genuine capabilities in vulnerability detection, threat analysis, and automated response, the same technologies empower attackers with enhanced capabilities for reconnaissance, social engineering, and evasion.

Curran’s assessment provides a balanced perspective: “Given how quickly the technology has evolved, organisations will have to adopt a more comprehensive and proactive cybersecurity policy if they want to stay ahead of attackers. After all, cyberattacks are a matter of ‘when,’ not ‘if,’ and AI will only accelerate the number of opportunities available to threat actors.”

The success of AI-powered cybersecurity ultimately depends not on the technology itself, but on how thoughtfully organisations implement these tools while maintaining human oversight and addressing fundamental security hygiene. As Johnston concluded, “We should adopt these in low-risk approaches,” emphasising the need for measured implementation rather than wholesale automation.

The AI revolution in cybersecurity is underway, but victory will belong to those who can balance innovation with prudent risk management – not those who simply deploy the most advanced algorithms.

See also: Google Cloud unveils AI ally for security teams
