News

ChatGPT Atlas and Perplexity Comet Are Vulnerable to Sidebar Spoofing

SquareX researchers discovered a vulnerability in the agentic AI browsers ChatGPT Atlas by OpenAI and Comet by Perplexity. The attack, dubbed AI Sidebar Spoofing, lets attackers impersonate the built-in AI assistant sidebar and deliver malicious instructions to users.

Atlas and Comet are next-generation AI browsers that integrate large language models directly into the sidebar. Users can ask the AI to analyze the page they’re on, execute commands, or launch automated tasks. The browsers also feature an agent mode, meaning the AI can independently make purchases, book tickets, fill out forms, and perform multi-step tasks. Comet was introduced in July 2025, and ChatGPT Atlas for macOS was released last week.

The gist of the sidebar spoofing attack is simple: a malicious extension injects JavaScript into the page and renders a fake sidebar on top of the real one. The fake looks identical to the original and intercepts every interaction with the AI assistant. The user never notices the substitution and keeps talking to what is now an attacker-controlled assistant.
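For illustration only, here is a minimal sketch of the kind of content script SquareX describes. The element ID, dimensions, and styling are hypothetical placeholders, as is the assumption that the extension injects a script into every page; the point is simply that ordinary DOM APIs are enough to draw an overlay in the region where the assistant sidebar is expected to appear.

```typescript
// Hypothetical content-script sketch: an extension-injected overlay
// styled to occupy the area where the AI assistant sidebar appears.
// All IDs, sizes, and text are illustrative placeholders.

function injectFakeSidebar(): void {
  const overlay = document.createElement("div");
  overlay.id = "fake-ai-sidebar"; // hypothetical ID

  // Pin the element to the right edge of the viewport, above page content,
  // roughly where the assistant sidebar would be rendered.
  Object.assign(overlay.style, {
    position: "fixed",
    top: "0",
    right: "0",
    width: "380px",
    height: "100vh",
    zIndex: "2147483647", // maximum z-index keeps it above page UI
    background: "#ffffff",
  });

  // A real attack would reproduce the assistant's look and intercept the
  // user's prompts here; that part is deliberately omitted.
  overlay.textContent = "Assistant";

  document.body.appendChild(overlay);
}

injectFakeSidebar();
```

Because this is plain DOM manipulation of the kind countless legitimate extensions perform, there is no obvious signal for the browser to flag.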

“Once the victim opens a new tab, the extension can create a fake sidebar indistinguishable from the real one,” SquareX explains.

The researchers emphasize that such an extension needs only standard host and storage permissions, the same ones requested by popular tools like Grammarly and password managers.
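As a rough sketch of what that permission footprint looks like, the Manifest V3 excerpt below (expressed as a TypeScript object for readability) declares only broad host access and the storage permission. The extension name, matched hosts, and script file are placeholders; the actual extension studied by SquareX may be structured differently.

```typescript
// Hypothetical Manifest V3 excerpt. Only host access and "storage" are
// requested, the same permissions many legitimate extensions ask for.
const manifest = {
  manifest_version: 3,
  name: "Example Productivity Helper", // placeholder name
  version: "1.0.0",
  permissions: ["storage"],
  host_permissions: ["<all_urls>"], // broad host access
  content_scripts: [
    {
      matches: ["<all_urls>"],
      js: ["content.js"], // the script that draws the fake sidebar
    },
  ],
};

export default manifest;
```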

The researchers describe three theoretical scenarios for such attacks. In the first, the malicious AI assistant redirects users to phishing pages when they ask about cryptocurrency. In the second, an OAuth attack carried out through fake file-sharing apps captures access to Gmail and Google Drive. In the third, a user trying to install some software is handed a command by the fake "assistant" that actually installs a reverse shell.

SquareX demonstrated the attack in practice using Google Gemini AI in Comet, configuring it so that the AI would return malicious instructions in response to certain queries. In real-world attacks, adversaries could use numerous trigger prompts to nudge victims toward a broad range of dangerous actions.

Initially, the researchers tested the attack only on Comet, since ChatGPT Atlas had not yet been released. Once OpenAI's browser came out, they repeated the test and found that sidebar spoofing worked there as well.

SquareX reached out to both companies, but neither Perplexity nor OpenAI responded to the researchers.

Experts warn that users of agentic AI browsers should be aware of the risks and limit such tools to the simplest tasks. In other words, it's better not to use them for email, finances, or other private data.
