
Summary
- The release of ChatGPT Atlas exposed vulnerabilities in AI browsers including Comet and Fellou, according to a report from Brave.
- The flaws allow websites to execute hidden commands through invisible images or text, compromising data and accounts.
- OpenAI adopted the Guardrails framework to mitigate risks, but experts warn that security is still insufficient given the advancement of agent automation.
The launch of ChatGPT Atlas, a new browser from OpenAI aimed at automating tasks, brought to light a series of concerns about security in browsers with artificial intelligence. A few days after the tool’s debut for macOS, researchers from Brave Software — a company known for its privacy-focused browser — released a report revealing critical vulnerabilities in this type of technology.
The flaws affect not only ChatGPT Atlas, but also other AI browsers, such as Comet (from Perplexity) and Fellou, demonstrating that the problem is systemic. According to experts, a security breach known as prompt injection can allow hackers to execute hidden commands on web pages, compromising files, passwords and bank accounts.
How the faults found work?
The Brave team has identified different forms of attacks involving the misuse of commands embedded in visual or textual content. In the case of Comet, for example, almost invisible instructions can be hidden in website images. When the user takes a screenshot and asks the browser to analyze the image, the system can interpret the hidden text as a legitimate order, allowing the attacker to perform actions remotely.
“The security vulnerability we found in the Comet browser […] it is not an isolated problem. Indirect prompt injections are a systemic issue facing Comet and other AI-powered browsers,” Brave warned in a post on X.
A similar scenario was observed in Fellou, where simply accessing a malicious website was enough for the browser to process instructions hidden in the page’s content. This happens because some browsers automatically send page texts to the AI language model — without distinguishing what comes from the user from what comes from the website.
According to researchers, this type of vulnerability breaks traditional web security foundations, such as the same origin policy (same-origin policy). This is because AI browsers operate with the same authenticated privileges as the user — that is, a simple disguised command can have access to bank accounts, corporate emails or confidential data.
“If you are logged into sensitive accounts, such as your bank or email provider, in your browser, simply summarizing a Reddit post could allow an attacker to steal money or your private data,” the experts explained in the report.
Brave highlighted that, until structural improvements are implemented, the so-called agentic browsing – navigation assisted by AI agents —–will remain intrinsically unsafe. The recommendation is that browsers keep these functions isolated from common browsing, requiring explicit confirmation from the user before any sensitive action.


Repercussion on the market
OpenAI, in turn, had already implemented the Guardrails security framework, launched on October 6th alongside AgentKit, a set of tools aimed at developers of AI agents. The measure seeks to reduce the risk of abuse, but experts say there is still no definitive solution to prompt injections.
Cybersecurity company HiddenLayer reinforced the seriousness of the problem: even a seemingly harmless chatbot can be tricked into opening private documents, sending emails or accessing sensitive data.
It is worth mentioning that, with the advancement of systems such as ChatGPT Atlas, Comet and Fellou, the need for more robust protection protocols grows — especially given the ability of these tools to act autonomously.
Until universal measures are implemented, experts recommend caution: avoid using these browsers for sensitive activities and keep two-step authentication activated on all accounts. This is because while it can facilitate daily tasks, artificial intelligence expands the digital attack surface.
Source: https://tecnoblog.net/noticias/chatgpt-atlas-levanta-alertas-sobre-seguranca-em-navegadores-com-ia/
