AI Browsers Face Unavoidable Security Flaw: Prompt Injection

fahd.zafar • October 28, 2025

AI-powered browsers with agentic capabilities are introducing a fundamental security vulnerability that experts believe may never be fully resolved: prompt injection attacks.

What Is Prompt Injection?

Prompt injection occurs when unintended text becomes commands for an AI system. Direct injection happens when malicious text enters at the prompt input point, whilst indirect injection occurs when content the AI is processing—such as web pages or PDFs—contains hidden commands that the AI executes as if they were legitimate user instructions.



Real-World Vulnerabilities Discovered

Research from Brave browser last week detailed indirect prompt injection vulnerabilities in the Comet and Fellou browsers. Testers successfully embedded instructions as hidden text within images and web pages. When users asked the browsers to summarise these pages, the AI followed the malicious instructions—opening Gmail, extracting subject lines from recent emails, and transmitting that data to attacker-controlled websites.


Security researcher Johann Rehberger demonstrated that OpenAI's Atlas browser could be manipulated to change settings by placing instructions at the bottom of online Word documents. Another researcher successfully got Atlas to respond with "Trust no AI" instead of actually summarising a Google document's contents.


OpenAI's Chief Information Security Officer Dane Stuckey acknowledged the challenge: "Prompt injection remains a frontier, unsolved security problem. Our adversaries will spend significant time and resources to find ways to make ChatGPT agent fall for these attacks."



Additional Attack Vectors

Recent discoveries have revealed even more concerning vulnerabilities. Researchers demonstrated that Atlas can be fooled through direct prompt injection by pasting invalid URLs containing malicious prompts into the browser's address bar. This creates phishing scenarios where users unknowingly authorise data sharing or file deletion.


Separate research identified cross-site request forgery vulnerabilities affecting Atlas and other browsers. When users visit sites with malicious code whilst logged into ChatGPT, those sites can send commands to the AI as if they were the authenticated user. This issue affects ChatGPT's memory system, persisting across devices and sessions.



Web-Based Chatbots Also Vulnerable

AI browsers aren't alone in facing these threats. The underlying chatbots are equally susceptible. Testing revealed that some chatbots can be tricked into following hidden instructions on web pages, even poisoning future interactions. In one example, a malicious prompt successfully instructed chatbots to add two to all mathematical calculations going forward—creating persistent errors that continued throughout the chat session.


Different AI systems showed varying levels of resistance. Microsoft Copilot and Claude demonstrated better detection of injection attempts, whilst others like Gemini and Perplexity proved more vulnerable to certain attack types.



Why This Problem May Be Unsolvable

Security experts believe prompt injection may be fundamentally unsolvable. The core issue lies in the basic architecture of AI systems: when these systems are designed to process untrusted external data and incorporate it into queries, that data inevitably influences the output in ways that can be exploited.


The challenge isn't a specific bug that can be patched—it's an inherent characteristic of how AI systems function. As long as AI models process text from potentially malicious sources and can influence actions based on that content, methods will exist to manipulate their behaviour. This makes prompt injection more comparable to an entire class of security vulnerabilities rather than an individual flaw with a straightforward fix.



The Agentic AI Amplification

The danger intensifies as AI becomes more agentic—gaining ability to act on behalf of users. AI-powered browsers can now open web pages, plan trips, and create lists autonomously. Google recently announced its Agents Payments Protocol, designed to allow AI agents to make purchases on users' behalf, even whilst they sleep.


AI systems increasingly access sensitive data including emails, files, and code repositories. Microsoft's Copilot Connectors grant the Windows-based agent permissions for Google Drive, Outlook, OneDrive, and Gmail. ChatGPT also connects to Google Drive.


The implications are serious: malicious prompt injection could potentially instruct AI to delete files, add malicious content, or send phishing emails from users' accounts—all without their knowledge or consent.



Mitigation Strategies

Whilst elimination may be impossible, experts suggest several approaches to minimise risk:


AI vendors should assign bots minimal privileges, require human consent for every action, and restrict content ingestion to vetted domains. Systems should treat all content as potentially untrustworthy, quarantine instructions from unvetted sources, and deny instructions that clash with apparent user intent.


Security controls must be applied downstream of AI output, including limiting capabilities, restricting access to private data, implementing sandboxed code execution, applying least privilege principles, and maintaining human oversight with comprehensive monitoring and logging.



Training Data Poisoning

Even if prompt injection were solved, AI systems face another threat: training data poisoning. Recent research from Anthropic demonstrated that just 250 malicious documents in a training corpus—potentially as simple as publishing them online—can create backdoors in AI models. Whilst the study focused on triggering nonsense output, the same technique could theoretically instruct models to delete files or exfiltrate data to attackers.



Risk Versus Reward

The fundamental question remains: is the convenience worth the risk? As agentic AI becomes embedded in operating systems and everyday tools, users may lack choice in exposure to these vulnerabilities.


The safest approach involves limiting AI empowerment to act autonomously and restricting the external data fed to these systems. The more capabilities AI agents possess and the more untrusted content they process, the greater the attack surface becomes.


Prompt injection represents an inherent security challenge in AI systems designed to process untrusted input and take autonomous actions. As these capabilities expand, organisations and individuals must carefully weigh convenience against the growing security risks.



Secure Your AI Integration

At Altiatech, we help organisations assess and mitigate risks associated with emerging technologies including AI systems. Our cybersecurity services can evaluate your AI tool usage, implement appropriate security controls, and develop policies that balance innovation with security.



Get in touch:

📧 Email: innovate@altiatech.com
📞 Phone (UK): +44 (0)330 332 5482


Innovation with security. Technology with control.

Ready to move from ideas to delivery?


Whether you’re planning a cloud change, security uplift, cost governance initiative or a digital delivery programme, we can help you shape the scope and the right route to market.


Email:
innovate@altiatech.com or call 0330 332 5842 (Mon–Fri, 9am–5:30pm).


Main contact page: https://www.altiatech.com/contact

A grid of dark gray squares, each with a person icon, featuring one bright blue square in the center.
By Simon Poole April 1, 2026
Explains how to configure break glass accounts in Microsoft Entra ID correctly, reducing risk and ensuring secure emergency access when standard controls fail.
A person holds a blue external hard drive connected by a cable to a laptop displaying a login screen.
By Simon Poole March 18, 2026
A practical guide to Microsoft Entra ID hardening and privileged access, with steps to reduce identity risk, strengthen controls, and improve security posture.
A hand clicks a computer mouse, connecting two digital bank icons with a glowing globe showing various currency symbols.
By Simon Poole March 13, 2026
Explores how open banking is scaling across the UAE and GCC and why strong API security and consent controls are essential for compliance, trust, and resilience.
Person holding a phone with a lock icon, using a laptop; digital security concept.
By Simon Poole March 11, 2026
A practical guide to reducing cyber risk exposure fast as geopolitical tensions rise, with clear steps to strengthen resilience, controls, and response.
A person points to an AI interface with glowing circuits, overlaid on a blue background.
By Simon Poole March 4, 2026
Explains how PPN 017 will shape AI procurement in the UK public sector and the questions buyers are likely to ask suppliers about governance, risk, and compliance.
Person using a calculator with a tablet on a wooden table.
By Wafik Rozeik February 25, 2026
Examines AI-augmented attacks targeting FortiGate devices at scale, what the risks mean for organisations, and the immediate steps to strengthen security.
Digital, pixelated person with red data streams, facing forward. Cyberpunk, data glitch effect.
By Simon Poole February 24, 2026
Examines AI-augmented attacks targeting FortiGate devices at scale, what the risks mean for organisations, and the immediate steps to strengthen security.
Person typing on laptop, cloud computing displayed on the screen, on a wooden table.
By Wafik Rozeik February 23, 2026
Explains why AI spend behaves differently and how anomaly management is becoming essential in FinOps to control costs, reduce risk, and improve cloud visibility.
Hand holding a phone displaying the Microsoft Copilot logo with the Microsoft logo blurred in the background.
By Simon Poole February 18, 2026
A practical governance checklist for Microsoft Copilot in 2026, using the Copilot Control System to manage risk, security, compliance, and oversight.
Route to market diagram: Bank to delivery platform, with steps like product mgmt and customer support.
By Simon Poole February 12, 2026
Explains what the Technology Services 4 (TS4) framework means for public sector buyers and how to procure Altiatech services through compliant routes.