Google fixed GeminiJack, a zero-click Gemini Enterprise flaw that could leak corporate data via crafted emails, invites, or documents, Noma Security says.
Google addressed a Gemini Enterprise flaw dubbed GeminiJack, which can be exploited in zero-click attacks triggered via crafted emails, invites, or documents. The vulnerability could have exposed sensitive corporate data, according to Noma Security.
Gemini Enterprise is Google’s AI-powered productivity platform for businesses, integrating generative AI capabilities into tools like Gmail, Calendar, Docs, and other Workspace apps. It enables organizations to leverage AI for tasks such as drafting emails, summarizing documents, generating content, and automating workflows, all within a corporate environment while keeping data secure.
“Noma Labs recently discovered a vulnerability, now known as GeminiJack, inside Google Gemini Enterprise and previously in Vertex AI Search. The vulnerability allowed attackers to access and exfiltrate corporate data using a method as simple as a shared Google Doc, a calendar invitation, or an email.” reads the report published by Noma Security. “No clicks were required from the targeted employee. No warning signs appeared. And no traditional security tools were triggered.”
GeminiJack shows that AI tools accessing Gmail, Docs, and Calendar create a new attack surface, manipulating the AI can compromise data, signaling a rising class of AI-native vulnerabilities.
GeminiJack allowed attackers to steal corporate data by embedding hidden instructions in a shared document. When an employee searched Gemini Enterprise, e.g., “show me our budgets,” the AI automatically retrieved the poisoned file, executed the instructions across Gmail, Calendar, and Docs, and sent the results to the attacker via a disguised image request. No malware or phishing occurred, and traffic appeared legitimate. A single injection could exfiltrate years of emails, full calendar, and entire document repositories, turning the AI into a highly efficient corporate spying tool.
Below is a description of the attack provided by Noma Security:
- 1. Content Poisoning: The attacker creates a normal-looking Google Doc, Google Calendar event, or Gmail and shares it with someone in your organization. inside the content are instructions designed to tell your AI to search for sensitive terms such as “budget,” “finance,” or “acquisition” and then load the results into an external image URL controlled by the attacker.
- 2. Normal Employee Activity: A regular employee uses Gemini Enterprise to search for something routine, such as “Q4 Budget plans.” There’s nothing unusual about their search.
- 3. AI Execution: Gemini Enterprise uses its retrieval system to gather relevant content. It pulls the attacker’s document into its context. The AI interprets the instructions as legitimate queries and executes them across all Workspace data sources it has permission to access.
- 4. Data Exfiltration: Google Gemini includes the attacker’s external image tag in its output. When the browser attempts to load that image, it sends the collected sensitive information directly to the attacker’s server through a single, ordinary HTTP request.
The attack uses indirect prompt injection to exploit the gap between user-controlled content and how an AI interprets instructions. An attacker plants hidden commands inside accessible content such as Google Docs, Calendar invites, or Gmail subjects. When an employee performs a normal search (e.g., “find all documents with Sales”), the RAG system retrieves the poisoned content and feeds it to Gemini. Gemini interprets the embedded instructions as legitimate, performs broad searches across all connected Workspace data, and exfiltrates the results by embedding them in an image tag that sends an HTTP request to the attacker’s server. This enables silent, automatic data theft without malware or user interaction.
Below is the video PoC published by the researchers:
The researchers discovered the vulnerability during a security assessment on 05/06/25 and reported the flaw to the Google Security Team the same day.
Google quickly addressed the issue, collaborating with researchers to fix the RAG pipeline flaw that let malicious content be misinterpreted as instructions.
“GeminiJack demonstrates the evolving security landscape as AI systems become deeply integrated with organizational data. While Google has addressed this specific issue, the broader category of indirect prompt injection attacks against RAG systems requires continued attention from the security community.” concludes the report. “This vulnerability represents a fundamental shift in how we must think about enterprise security. “
Follow me on Twitter: @securityaffairs and Facebook and Mastodon
(SecurityAffairs – hacking, Google)
