The simplest mitigation is also the least popular one: don't give the agent credentials in the first place. Scope it to read-only where possible, and treat every page it visits as untrusted input. But that limits what agents can do, which is why nobody wants to hear it.
I wonder if it'd be possible to train an LLM with such architecture: one input for the instructions/conversation and one "data-only" input. Training would ensure that the latter isn't interpreted as instructions, although I'm not knowledgeable enough to understand if that's even theoretically possible: even if the inputs are initially separate, they eventually mix in the neural network. However, I imagine that training could be done with massive amounts of prompt injections in the "data-only" input to penalize execution of those instructions.
The other type of attack would be what I would call "induced hallucinations", where the attacker crafts data not to get the LLM to do anything the data says, but to do what the attacker wants.
This is a common attack to demonstrate on neural network based image classifiers. Start with a properly classified image, and a desired incorrect classification. Then, introduce visually imperceptible noise until the classifier reports it as your target classification. There is no data/instruction confusion here: it is all data.
The core problem is that neural networks are fairly linear (which is what makes it possible to construct efficient hardware for them). They are, of course, not actually linear functions, but close enough to make linear algebra based attacks feasible.
It is probably better to think of this sort of attack in term of crypto analysis, which frequently exploits linearity in cryptosystems.
The depth of LLM networks make this sort of attack difficult; but I don't see any reason to think you can add enough layers to make it impossible. Particularly given that there is other research showing structure across layers, with groupings of layers having identifiable functionality. This means it is probably possible to reason about attacking individual layers like an onion.
This problem isn't really unique to AI either. Human written code has a tendency to be vulnerable to a similar attack, where maliciously crafted data can exploit the processor to do anything (e.g buffer overflow into arbitrary code execution).
However, you may immediately see how using same input space essentially relies on the model itself to do the judgement which can't be ultimately trusted
We learned so many years ago that separating code and data was important for security. It's such a huge step backwards that it's been tossed in the garbage.
We built a hotel listing page with a display:none injection ($189 listing with a hidden override to book $4,200) and tested six DOM extraction APIs via Chrome CDP. The split: innerText, Chrome Accessibility Tree, and Playwright's ARIA snapshot all filter it. textContent, innerHTML, and direct querySelector don't.
Then we audited the source code of all four major browser MCP tools: chrome-devtools-mcp (Google), playwright-mcp (Microsoft), chrome-cdp-skill, and puppeteer-mcp. Every single one defaults to a safe extraction method — accessibility tree or innerText. That's the good news.
The bad news: three out of four expose evaluate_script or eval commands that let the agent run arbitrary JS in the page context. When the accessibility tree doesn't return enough text (it often only gives headings and buttons), the agent's natural next step is textContent or innerHTML via eval. This is even shown as an example in the chrome-devtools-mcp docs.
Also: display:none is just the simplest technique. We tested opacity:0, font-size:0, and position:absolute left:-9999px — all three bypass even the safe defaults because the elements are technically "rendered" and accessible to screen readers. A determined attacker who knows you're using the accessibility tree can trivially switch to opacity-based hiding.
Works great with OpenClaw, Claude Cowork, or anything, really
OpenGuard is an OpenAI/Anthropic-compatible LLM proxy with middleware-style configuration for protocol-level inspections of the traffic that goes through it. Right now it has a small set of guards that is being actively expanded.
This is a Gemini deep research response that someone ran through some kind of shortening prompt. They even kept all the footnotes.
It used to be that startups would run blogs that did technical analysis, maybe talked a little market research, advanced the strategy of the business.
The good ones showed you how the leaders of the business thought, built trust and generated leads.
Now we have whatever this bullshit is. No evidence of human thought or experience, it's not even apparent what the objective of the piece is.
The prose is unbearably bad. Your brain just sort of slips on it. There's basically zero through line in this thing. a section ends, the next one begins, and it's not even clear what's under discussion.
One section starts "The clearest public descriptions landed between mid-2025 and early 2026." Descriptions of what? No clarity on this. Probably because it got "tersed" out.
At this point I feel like blogs are like lawn ornaments for startups. Even now, the sheer contempt for other people's time and attention is still a mild shock to me.
I actually assure you that no deep research from any of the provider was used to create the article itself, but I used a custom-built research pipeline for creating a dossier on promt injections as a starting point.
The article was intended as an overview of prompt injections with my prediction what will happen next in this space, which is a soft justification why the tools like OpenGuard are needed. I've spent multiple days iterating on the prose without an ill intent, mostly aiming to make it dense and informative to avoid wasting people's time, which I see backfired here.
I'm deeply sorry that it left such a bad taste despite my best effort, there's still a lot to learn for me.