OpenAI Admits AI Browsers Face Unsolvable Rapid Attacks
NEWNow you can listen to News articles!
Cybercriminals no longer always need malware or exploits to get into systems. Sometimes they just need the right words in the right place. OpenAI now openly acknowledges that reality. The company says that rapid injection attacks against artificial intelligence (AI)-powered browsers are not a bug that can be completely fixed, but rather a long-term risk that comes with allowing AI agents to roam the open web. This raises uncomfortable questions about how secure these tools really are, especially as they gain more autonomy and access to their data.
Sign up to receive my FREE CyberGuy report
Get my best tech tips, urgent security alerts, and exclusive offers delivered right to your inbox. Plus, you’ll get instant access to my Ultimate Guide to Surviving Scams, free when you join me CYBERGUY.COM information sheet.
NEW MALWARE CAN READ YOUR CHATS AND STEAL YOUR MONEY

AI-powered browsers can read and act on web content, which also makes them vulnerable to hidden instructions that attackers can insert into pages or documents. (Kurt “CyberGuy” Knutsson)
Why doesn’t rapid injection go away?
In a recent blog post, OpenAI admitted that fast injection attacks are unlikely to be completely eliminated. Fast injection works by hiding instructions within web pages, documents or emails in ways that humans don’t notice, but AI agents do. Once the AI reads that content, it can be tricked into following malicious instructions.
OpenAI compared this problem to scams and social engineering. You can reduce them, but you can’t make them disappear. The company also acknowledged that “agent mode” in its ChatGPT Atlas browser increases risk because it expands the attack surface. The more an AI can do for you, the more damage it can cause when something goes wrong.
OpenAI launched the ChatGPT Atlas browser in October and security researchers immediately began testing its limits. Within hours, demos appeared showing that a few carefully placed words within a Google Document could influence browser behavior. That same day, Brave published its own warning, explaining that indirect injection is a structural problem for AI-powered browsers, including tools like Perplexity’s Comet.
This is not just an OpenAI problem. Earlier this month, the UK’s National Cyber Security Center warned that rapid injection attacks against generative AI systems may never be fully mitigated.
FALSE AI CHAT RESULTS ARE SPREADING DANGEROUS MAC MALWARE

Fast injection attacks exploit trust at scale, allowing malicious instructions to influence what an AI agent does without the user seeing it. (Kurt “CyberGuy” Knutsson)
Offsetting risks with AI browsers
OpenAI says it views rapid injection as a long-term security challenge that requires constant pressure, not a one-size-fits-all solution. Their approach is based on faster patch cycles, continuous testing, and layered defenses. This puts it broadly in line with rivals such as Anthropic and Google, which have argued that agent systems need architectural checks and continuous stress testing.
Where OpenAI is taking a different approach is with something it calls an “LLM-based automated attacker.” In simple terms, OpenAI trained an AI to act like a hacker. Using reinforcement learning, this attacking bot looks for ways to introduce malicious instructions into an AI agent’s workflow.
The robot first executes the attacks in simulation. Predict how the target AI would reason, what steps it would take, and where it might fail. Based on that feedback, refine the attack and try again. Because this system has knowledge of the AI’s internal decision-making, OpenAI believes it can expose weaknesses faster than real-world attackers.
Even with these defenses, AI browsers are not secure. They combine two things attackers love: autonomy and access. Unlike regular browsers, they not only display information, but also read emails, scan documents, click on links, and perform actions on your behalf. That means a single malicious message hidden in a web page, document, or message can influence what the AI does without you seeing it. Even when safeguards are in place, these agents operate by trusting content at scale, and that trust can be manipulated.
THIRD-PARTY BREACH EXPOSES CHATGPT ACCOUNT DETAILS

As AI browsers gain more autonomy and access to personal data, limiting permissions and keeping human confirmation informed becomes critical for security. (Kurt “CyberGuy” Knutsson)
7 steps you can take to reduce risk with AI browsers
You may not be able to eliminate rapid injection attacks, but you can significantly limit their impact by changing the way you use AI tools.
1) Limit what the AI browser can access
Only give an AI browser access to what is absolutely necessary. Avoid connecting your primary email account, cloud storage, or payment methods unless there is a clear reason. The more data an AI can see, the more valuable it will be to attackers. Limiting access reduces the blast radius if something goes wrong.
2) Require confirmation for every sensitive action
Never allow an AI browser to send emails, make purchases, or change account settings without asking you first. Confirmation breaks long chains of attacks and gives you a moment to detect suspicious behavior. Many fast injection attacks rely on AI acting silently in the background without user review.
3) Use a password manager for all accounts.
A password manager ensures that each account has a unique, secure password. If an AI browser or malicious page leaks a credential, attackers cannot reuse it elsewhere. Many password managers also refuse to auto-fill unknown or suspicious sites, which can alert you that something is wrong before you enter something manually.
Next, check to see if your email has been exposed in previous breaches. Our number one password manager (see Cyberguy.com) includes a built-in breach scanner that checks to see if your email address or passwords have appeared in known breaches. If you discover a match, immediately change any reused passwords and protect those accounts with new, unique credentials.
Check out the best expert-reviewed password managers of 2025 at Cyberguy.com
4) Run powerful antivirus software on your device
Even if an attack starts within the browser, antivirus software can still detect suspicious scripts, unauthorized system changes, or malicious network activity. Powerful antivirus software focuses on behavior, not just files, which is critical when it comes to AI-powered or script-based attacks.
The best way to protect yourself from malicious links that install malware and potentially access your private information is to have powerful antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe.
Get my picks for the best antivirus protection winners of 2025 for your Windows, Mac, Android, and iOS devices at Cyberguy.com
5) Avoid broad or open-ended instructions
Telling an AI browser to “handle whatever it takes” gives attackers room to manipulate it through hidden prompts. Be specific about what AI can do and what it should never do. Strict instructions make it difficult for malicious content to influence the agent.
6) Be careful with AI summaries and automated scans
When an AI browser scans emails, documents, or web pages for you, remember that hidden instructions may be lodged within that content. Treat AI-generated actions as drafts or suggestions, not final decisions. Review anything the AI plans to act on before approving it.
7) Keep your browser, artificial intelligence tools and operating system updated.
Security fixes for AI browsers are evolving rapidly as new attack techniques emerge. Delaying updates leaves known weaknesses open longer than necessary. Turning on automatic updates ensures that you get protection as soon as they are available, even if you miss the ad.
CLICK HERE TO DOWNLOAD THE News APP
Kurt’s Key Takeaway
There has been a meteoric rise in AI browsers. We’re now seeing them at major tech companies, including OpenAI’s Atlas, The Browser Company’s Dia, and Perplexity’s Comet. Even existing browsers such as Chrome and Edge are striving to add artificial intelligence and agent features to their current infrastructure. While these browsers can be useful, the technology is still in its infancy. It is better not to be fooled by the hype and wait for them to mature.
Do you think it’s worth the risk to use AI browsers today or are they moving faster than security can keep up? Let us know by writing to us at Cyberguy.com
Sign up to receive my FREE CyberGuy report
Get my best tech tips, urgent security alerts, and exclusive offers delivered right to your inbox. Plus, you’ll get instant access to my Ultimate Guide to Surviving Scams, free when you join me CYBERGUY.COM information sheet.
Copyright 2025 CyberGuy.com. All rights reserved.
Kurt “CyberGuy” Knutsson is an award-winning technology journalist with a deep love for technology, gear and devices that improve lives with his contributions to News and News Business since mornings on “News & Friends.” Do you have any technical questions? Get Kurt’s free CyberGuy newsletter, share your voice, a story idea or comment on CyberGuy.com.


