According to Google, hackers are abusing Gemini to accelerate cyberattacks, and it’s not just limited to cheesy phishing spam. A new report from the Google Threat Intelligence Group says state-backed groups have used Gemini in multiple phases of an operation, from early target research to post-compromise work.
The activity includes clusters linked to China, Iran, North Korea and Russia. According to Google, the observed prompts and results spanned target profiling, drafting social engineering copy, translation, coding assistance, vulnerability research and debugging when tools break mid-intrusion. Quick help with routine tasks can still change the outcome.
AI help, same old playbook
Google researchers describe the use of AI as acceleration, not magic. Attackers already conduct reconnaissance, craft decoys, refine malware and hunt bugs. Gemini simply shortens that loop, especially when operators need quick rewrites, language assistance or code fixes under pressure.
The report details China-linked activity in which an operator adopted an expert cybersecurity persona and pushed Gemini to automate vulnerability assessment and draw up targeted testing plans under a fabricated pretext. Google also says a China-based actor repeatedly used Gemini for debugging, research and technical consulting related to intrusions. It’s less about new tactics and more about fewer speed bumps.
The risk is not just phishing
The big change is the pace. When groups can iterate more quickly on targeting and tooling, defenders have less time between early signals and actual damage. It also means fewer obvious breaks where errors, delays or repetitive manual work would show up in logs.
Google also points out another threat that looks nothing like classic scams: model extraction and knowledge distillation. In this scenario, actors with legitimate API access flood the model with queries to replicate its outputs and reasoning, then use that material to train a rival model. Google frames it as harm to commercial and intellectual property, with potential downstream risk as it scales, citing one example of roughly 100,000 prompts aimed at reproducing the model’s behavior on non-English tasks.
What to watch next
Google says it has disabled accounts and infrastructure tied to the documented Gemini abuse and has added targeted defenses to Gemini’s classifiers. It also says it continues adversarial testing and relies on its safety guardrails.
The practical takeaway for security teams is that AI-powered attacks are faster and not necessarily smarter. Track sudden improvements in decoy quality, faster tool iterations, and unusual API usage patterns, then tighten response runbooks so speed doesn’t become the attacker’s greatest advantage.
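One of the signals mentioned above, unusual API usage patterns, can be approximated with simple rate baselining. The sketch below is a minimal illustration, not anything from Google's report: it flags an API key whose per-interval request count suddenly jumps far above its own recent average, the kind of burst that bulk extraction queries might produce. The function names, window size and threshold are all hypothetical choices.

```python
from collections import deque

def spike_detector(window: int = 20, threshold: float = 3.0):
    """Return a checker that flags request-rate spikes for one API key.

    `window` and `threshold` are illustrative defaults, not values
    taken from the report. The checker keeps a rolling baseline of
    recent per-interval request counts and flags any interval that
    exceeds `threshold` times the baseline mean.
    """
    history = deque(maxlen=window)  # rolling per-interval request counts

    def check(requests_this_interval: int) -> bool:
        flagged = False
        if len(history) >= 5:  # wait for a minimal baseline first
            mean = sum(history) / len(history)
            # Require both a relative jump and a small absolute margin
            # so near-zero baselines don't trigger on noise.
            flagged = requests_this_interval > max(threshold * mean, mean + 10)
        history.append(requests_this_interval)
        return flagged

    return check

# Usage: steady traffic, then a burst resembling bulk extraction queries.
check = spike_detector()
for rate in [50, 52, 48, 51, 49, 50]:
    assert not check(rate)   # baseline traffic stays quiet
print(check(5000))           # prints True
```

In practice a team would run this per API key against metered usage logs and pair it with content-level signals, since a patient extraction campaign can stay under any single rate threshold.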