Google developers have reported that the “ASCII character smuggling” issue in Gemini will not be fixed. Such an attack can be used to trick the AI assistant into providing users with false information, altering the model’s behavior, and covertly poisoning data.
Within an ASCII smuggling attack, characters from the Unicode Tags block are used to embed payloads that are invisible to users but can be detected and processed by large language models (LLMs). Essentially, this attack is similar to other techniques researchers have recently used against Google Gemini: they all rely on the gap between what a human sees and what the machine “reads” — whether it’s CSS manipulation or circumventing graphical interface restrictions.
Infosec researcher from FireTail Viktor Markopoulos tested ASCII smuggling against popular AI models and found that Gemini (via Calendar invites and emails), DeepSeek (via prompts), and Grok (via posts on X) are vulnerable to such attacks.
Claude, ChatGPT, and Microsoft Copilot, according to FireTail, were found to be better protected — they implement input sanitization.

As for Gemini, its integration with Google Workspace poses significant risks, as attackers can leverage the attack to embed hidden text into emails and Calendar invitations. Markopoulos discovered that it’s possible to hide malicious instructions in the invitation subject, ultimately altering the organizer’s details and injecting hidden meeting descriptions or links.

“If an LLM has access to a user’s mailbox, it’s enough to send an email with hidden commands, and the model will, for example, search for confidential messages in the victim’s inbox or forward the contact list, turning an ordinary phishing attack into a tool for autonomous data collection,” the expert adds, speaking about the email threat.
In addition, LLMs tasked with browsing websites may encounter hidden payloads (for example, in product descriptions) and retrieve malicious URLs to pass on to users.
An expert demonstrated that the attack can fool Gemini and force the assistant to provide users with false information. For example, in one case the researcher passed the AI an invisible instruction, which Gemini processed and then recommended to the victim a potentially malicious site where one could supposedly buy a good phone at a discount.
On September 18, 2025, a researcher informed Google engineers of their findings; however, the company rejected the report, stating that the issue is not a vulnerability and can only be used in the context of social engineering attacks.
The expert notes that other companies view such issues differently. For example, Amazon has published a detailed guide dedicated to the problem of Unicode character “smuggling.”