Attention Gmail users! Google warns of hidden AI scam stealing passwords, 1.8 billion accounts vulnerable

Google has warned its 1.8 billion Gmail users worldwide of a new cybersecurity threat that exploits advances in artificial intelligence, reported Men’s Journal.
What are Indirect Prompt Injections?
The company has reportedly raised the alarm overindirect prompt injections, a form of attack that it says could target individuals, businesses and even governments.
In a recent blog post, Google explained that unlike direct prompt injections, where hackers enter malicious commands into an AI tool, indirect attacks involve hiding harmful instructions within external sources such as emails, documents or calendar invites. Once processed, these instructions can trick the system into exposing sensitive information or carrying out unauthorised actions.
“With the rapid adoption of generative AI, a new wave of threats is emerging across the industry,” the company wrote, warning that the risk becomes more significant as AI is used more widely for professional and personal tasks.
Experts explain the risk
Technology expert Scott Polderman toldThe Daily Record that attackers are exploiting Gemini, Google’s own AI chatbot, to conduct such scams. He explained that malicious code can be concealed within an email and, when read by Gemini, used to extract login details without the user realising.
“The danger is that people don’t need to click on anything,” Polderman said. “Hidden instructions can cause the AI to reveal passwords and other data, effectively turning the system against itself.”
Google has reportedly said it has already begun rolling out new protections. These include strengthening its Gemini 2.5 model, introducing machine-learning systems to spot suspicious prompts, and adding wider security measures at the system level. According to the company, these layers are designed to raise the difficulty and expense of such attacks, forcing cybercriminals to use less subtle and more detectable methods.
The warning comes amid growing concern about how artificial intelligence could be manipulated for malicious purposes, highlighting the potential risks of embedding AI tools into everyday services relied upon by billions of users worldwide.