News

Researchers tricked the Comet AI browser into buying fake products

Guardio researchers tested a browser with an AI agent and concluded that it is vulnerable to both old and new attack techniques that can coerce it into interacting with malicious pages and prompts.

AI agents in browsers are capable of autonomously surfing the web, making purchases, and managing various online tasks (for example, processing email, booking tickets, filling out forms, and so on).

At present, the primary example of a browser with a built-in AI agent is Comet, developed by Perplexity, and it is the focus of Guardio’s research. However, it’s worth noting that Microsoft Edge also includes agentic AI features (via integration with Copilot), and OpenAI is currently developing its own platform for these tasks under the codename Aura.

Although for now these tools are mostly geared toward enthusiasts, Comet is quickly entering the mass consumer market. And researchers from Guardio warn that such solutions are not adequately protected against known and novel attacks crafted specifically to target them.

As tests have shown, browsers with AI agents can be vulnerable to phishing and prompt injections, and may even make purchases in fake online stores.

For example, in one of the tests the analysts asked Comet to buy an Apple Watch on a fake Walmart website that the researchers themselves created using the Lovable service. Although in the experiment Comet was directed straight to the bogus store, it’s noted that in real life an AI agent could end up in the same situation due to malicious ads, black-hat SEO, and other factors.

As a result, the AI scanned the fake site and then, without verifying its legitimacy, proceeded to checkout and automatically entered the bank card details and address, completing the purchase without asking the user for confirmation.

In the second test, the researchers created a fake email purportedly from Wells Fargo, embedded a link to a real phishing page, and sent it from a ProtonMail address. Comet treated the incoming message as a legitimate email from the bank, followed the phishing link, loaded the fake Wells Fargo login page, and prompted the user to enter their credentials on it.

In the third test, aimed at evaluating resistance to prompt injections, the researchers created a fake CAPTCHA page and used the classic ClickFix attack, but supplemented it with hidden instructions for the AI agent embedded in the page’s code. As a result, Comet interpreted these hidden instructions as legitimate commands and clicked the CAPTCHA button, thereby triggering the download of a malicious file.

Guardio emphasizes that these tests are just the tip of the iceberg of problems caused by the emergence of agentic AI browsers. Moreover, in the future such threats could supplant conventional human-targeted attack models.

“In the era of AI vs. AI, scammers don’t need to deceive millions of different people; it’s enough to break a single AI model,” the experts write. “Once they do, that exploit can be scaled indefinitely. Since attackers have access to the same models, they can ‘train’ their malicious AI against the victim’s AI until the scam works flawlessly.”

Experts conclude that, for now, agentic AI in browsers is still too immature, and it’s not recommended to entrust it with important tasks such as banking, shopping, or accessing email accounts. Users are also advised to avoid giving AI agents credentials, financial details, and personal information — it’s safer to enter this data manually.

it? Share: