Perplexity Comet Flaw Exposed User Data to Attackers, Brave Reports

Perplexity Comet Flaw Exposed User Data to Attackers, Brave Reports

In brief

  • In a demo, Comet’s AI assistant followed embedded prompts and posted private emails and codes.
  • Brave says the vulnerability remained exploitable weeks after Perplexity claimed to have fixed it.
  • Experts warn that prompt injection attacks expose deep security gaps in AI agent systems.

Brave Software has uncovered a security flaw in Perplexity AI’s Comet browser that showed how attackers could trick its AI assistant into leaking private user data.

In a proof-of-concept demo published August 20, Brave researchers identified hidden instructions inside a Reddit comment. When Comet’s AI assistant was asked to summarize the page, it didn’t just summarize—it followed the hidden commands.

Perplexity disputed the severity of the finding. A spokesperson told Decrypt the issue “was patched before anyone noticed” and said no user data was compromised. “We have a pretty robust bounty program,” the spokesperson added. “We worked directly with Brave to identify and repair it.”

Brave, which is developing its own agentic browser, maintained that the flaw remained exploitable weeks after the patch and argued Comet’s design leaves it open to further attacks.

Brave said the vulnerability comes down to how agentic browsers like Comet process web content. “When users ask it to summarize a page, Comet feeds part of that page directly to its language model without distinguishing between the user’s instructions and untrusted content,” the report explained. “This allows attackers to embed hidden commands that the AI will execute as if they were from the user.”

Prompt injection: old idea, new target

This type of exploit is known as a prompt injection attack. Instead of tricking a person, it tricks an AI system by hiding instructions in plain text.

“It’s similar to traditional injection attacks—SQL injection, LDAP injection, command injection,” Matthew Mullins, lead hacker at Reveal Security, told Decrypt. “The concept isn’t new, but the method is different. You’re exploiting natural language instead of structured code.”

Security researchers have been warning for months that prompt injection could become a major headache as AI systems gain more autonomy. In May, Princeton researchers showed how crypto AI agents could be manipulated with “memory injection” attacks, where malicious information gets stored in an AI’s memory and later acted on as if it were real.

Even Simon Willison, the developer credited with coining the term prompt injection, said the problem goes far beyond Comet. “The Brave security team reported serious prompt injection vulnerabilities in it, but Brave themselves are developing a similar feature that looks doomed to have similar problems,” he posted on X.

Shivan Sahib, Brave’s vice president of privacy and security, said its upcoming browser would include “a set of mitigations that help reduce the risk of indirect prompt injections.”

“We’re planning on isolating agentic browsing into its own storage area and browsing session, so that a user doesn’t accidentally end up granting access to their banking and other sensitive data to the agent,” he told Decrypt. “We’ll be sharing more details soon.”

The bigger risk

The Comet demo highlights a broader problem: AI agents are being deployed with powerful permissions but weak security controls. Because large language models can misinterpret instructions—or follow them too literally—they’re especially vulnerable to hidden prompts.

“These models can hallucinate,” Mullins warned. “They can go completely off the rails, like asking, ‘What’s your favorite flavor of Twizzler?’ and getting instructions for making a homemade firearm.”

With AI agents being given direct access to email, files, and live user sessions, the stakes are high. “Everyone wants to slap AI into everything,” Mullins said. “But no one’s testing what permissions the model has, or what happens when it leaks.”

Generally Intelligent Newsletter

A weekly AI journey narrated by Gen, a generative AI model.

Leave a Reply

Your email address will not be published. Required fields are marked *