Perplexity's Comet browser just handed the AI browser market a brutal reality check. If an agentic browser can be tricked by a calendar invite into touching local files, then the real product is not convenience. It is a new attack surface.
What researchers found
Zenity Labs disclosed a Comet vulnerability that let attacker controlled content hijack the browser agent's intent during a routine workflow. Their writeup says a seemingly normal calendar invite could push the browser into accessing local files and silently exfiltrating data, all without explicit user approval.
The company says the issue was fixed before public disclosure. Good. But the patch does not erase the bigger point. The weakness was not just one bug in one product. It was a design problem in how agentic browsers mix untrusted content, privileged sessions, and autonomous action.
Why agentic browsers are different
Normal browsers are messy, but their security model is old and heavily constrained. They do not usually read a page, interpret hidden instructions, decide what matters, and then go act on your behalf across files, tabs, calendars, and authenticated services.
Agentic browsers do exactly that. That is why the risk profile changes so fast. The moment you let a browser operate like a junior assistant with your sessions and files in reach, every webpage becomes part interface and part prompt injection battlefield.
This is the part many AI browser demos skip. They sell magic. Security teams inherit the horror movie.
The real issue is trust collapse
Zenity described the core problem as insufficient isolation between user commands and untrusted input. That sounds technical, but the practical meaning is simple: the browser could not reliably tell the difference between what the user wanted and what the attacker wanted.
That is deadly in agentic software. Once the boundary blurs, the agent starts treating hostile content like legitimate workflow context. A calendar invite stops being an invite. It becomes an execution path.
And this is not just about local files. If the same trust model extends into password managers, developer tools, internal dashboards, or browser sessions, you are not looking at a quirky browser bug. You are looking at a privilege escalation machine.
What this means for AI product builders
Every team building AI operators should stop pretending that consent screens alone solve this. They do not. Security has to be architectural. Sensitive actions need hard permission gates. Local files need explicit isolation. Cross tool execution needs visibility and policy controls. And agents need fewer silent assumptions about user intent.
The current market is rushing toward AI browsers because they look like the next wedge product for consumer AI. Fair enough. But the winners will not be the ones that automate the most clicks. They will be the ones that prove they can survive hostile input without betraying the user.
If Comet's scare does anything useful, it should kill the lazy idea that agentic UX can be bolted onto a browser and secured later.
Where this goes next
Expect more disclosures like this, not fewer. Researchers are starting to test agentic products the way they already test cloud software, mobile apps, and browsers. That pressure is healthy.
For users, the near term rule is simple. Treat AI browsers like high privilege tools, not toys. Do not give them broad filesystem access unless you really understand the tradeoff. For builders, the rule is even simpler. Design around adversarial content from day one or prepare to get embarrassed in public.
OpenClaw is open-source and free to run yourself. If you want managed hosting with monitoring and updates, check out OpenClawHosting.
FAQ
What happened with Perplexity Comet?
Researchers showed that Perplexity's Comet browser could be manipulated by malicious content embedded in a calendar invite. The exploit could trigger local file access and data exfiltration without clear user approval.
Was the Comet vulnerability fixed?
According to Zenity Labs, yes. The issue was fixed before public disclosure. That reduces immediate user risk, but it does not remove the broader design lessons for agentic browsers.
Why are AI browsers riskier than normal browsers?
AI browsers combine privileged sessions, autonomous actions, and interpretation of untrusted content. That mix creates new failure modes where hostile content can influence what the browser does on the user's behalf.
Related reads: ChatGPT browser fingerprinting, OpenAI shutting down Sora, and Apple opening Siri to third party AI.