The server was deserializing untrusted input from the client directly into module+export name lookups, and then invoking whatever the client asked for (without verifying that metadata.name was an own property).
return moduleExports[metadata.name]
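A minimal sketch (not React's actual source; `moduleExports` and `lookup` are illustrative names) of why a bare bracket lookup on an attacker-controlled name is dangerous, and what the own-property guard changes:

```javascript
// Hypothetical sketch: the only function the developer meant to expose.
const moduleExports = {
  updateProfile: () => "ok",
};

// A bracket lookup walks the prototype chain, so names the developer
// never exported still resolve to functions inherited from Object.prototype:
console.log(typeof moduleExports["constructor"]); // "function"
console.log(typeof moduleExports["hasOwnProperty"]); // "function"

// The shape of the fix: only own properties are reachable.
function lookup(exports, name) {
  if (Object.prototype.hasOwnProperty.call(exports, name)) {
    return exports[name];
  }
  return undefined;
}

console.log(typeof lookup(moduleExports, "updateProfile")); // "function"
console.log(typeof lookup(moduleExports, "constructor")); // "undefined"
```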
We can patch in the hasOwnProperty check and tighten the deserializer, but there is a deeper issue. React never really acknowledged that it was building an RPC layer. If you look at actual RPC frameworks like gRPC or even old-school SOAP, they all start with schemas, explicit service definitions, and a bunch of tooling to prevent boundary confusion. React went the opposite way: the API surface is whatever your bundler can see, and the endpoint is whatever the client asks for.
My guess is this won't be the last time we see security fallout from that design choice. Not because React is sloppy, but because it's trying to solve a problem category that traditionally requires explicitness, not magic.
The fact that React embodies an RPC scheme in disguise is quite obvious if you look at the kind of functionality that is implemented; some of it simply cannot be done any other way. But then you should own that decision and add all of the safeguards such a mechanism requires; you can't bolt those on after the fact.
A similar bug could be introduced in the implementation of other RPC systems too. It's not entirely specific to this design.
(I contribute to React but not really on RSC.)
Imagine these dozens of people, working at Meta.
They sit at the table, agree to call eval(), and never stop to think "what could go wrong".
If I had a dollar for every time a serious vulnerability that started like this was discovered in the last 30 years...
Building a private, out of date repo doesn't seem great either.
The problem is this specific "call whatever server code the client asks" pattern. Traditional APIs with defined endpoints don’t have that issue.
> My guess is this won't be the last time we see security fallout from that design choice. Not because React is sloppy, but because it’s trying to solve a problem category that traditionally requires explicitness, not magic.
Now I'm worried, but I don't use React, so I have to ask: how does SvelteKit fare in this regard?
The vast majority of developers do not update their frameworks to the latest version, so this is something that will linger for years. Particularly if you're on Next something-like-12 and there are breaking changes on the way to 16 plus the patch.
OTOH this is great news for bad actors and pentesters.
i've been thinking basically this for so long, i'm kinda happy to be validated about this lol
console.log("jsjs")
> A pre-authentication remote code execution vulnerability exists in React Server Components versions 19.0.0, 19.1.0, 19.1.1, and 19.2.0 including the following packages: react-server-dom-parcel, react-server-dom-turbopack, and react-server-dom-webpack. The vulnerable code unsafely deserializes payloads from HTTP requests to Server Function endpoints.
React's own words: https://react.dev/blog/2025/12/03/critical-security-vulnerab...
> React Server Functions allow a client to call a function on a server. React provides integration points and tools that frameworks and bundlers use to help React code run on both the client and the server. React translates requests on the client into HTTP requests which are forwarded to a server. On the server, React translates the HTTP request into a function call and returns the needed data to the client.
> An unauthenticated attacker could craft a malicious HTTP request to any Server Function endpoint that, when deserialized by React, achieves remote code execution on the server. Further details of the vulnerability will be provided after the rollout of the fix is complete.
I think deserialization that relies on blacklists of properties is a dangerous game.
I think rolling your own object deserialization in a library that isn’t fully dedicated to deserialization is about as dangerous as writing your own encryption code.
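To illustrate the point with a toy example (all names here are made up), a blacklist-based getter misses inherited names like `constructor`, while an allowlist plus an own-property check does not:

```javascript
// Illustrative sketch: why blacklists of "bad" property names tend to
// lose against JavaScript's object model. The block list is incomplete
// by nature; the object model keeps growing around it.
const BLOCKED = new Set(["__proto__", "eval"]);

function blacklistGet(obj, key) {
  if (BLOCKED.has(key)) return undefined;
  return obj[key]; // "constructor" still slips through via the prototype
}

function allowlistGet(obj, key, allowed) {
  // Safer: only names the developer explicitly exposed resolve at all.
  return allowed.has(key) && Object.prototype.hasOwnProperty.call(obj, key)
    ? obj[key]
    : undefined;
}

const target = { greet: () => "hi" };
console.log(typeof blacklistGet(target, "constructor")); // "function" — missed
console.log(allowlistGet(target, "constructor", new Set(["greet"]))); // undefined
```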
The one problem I haven't understood is how it manages to perform a second call afterwards, as only being able to call the Function constructor doesn't really amount to much (still a serious problem, though).
Intentionally? That's a scary feature
It's RPC: remote procedure calls. An approach that has made a comeback in the front-end space recently. There was tRPC; then React made a splash with the release of its Server Components; then other frameworks started emulating the approach. I think Svelte now has something similar with its "remote functions", and Solid has been working on something similar, so that SolidStart now has a "use server" pragma. They probably don't replicate React's protocol, but the idea of calling functions on the server is similar.
whatever monstrosity hides underneath these nice high-level TypeScript frameworks to make all of it happen in JS, usually that's the worrying part
What do Server Components do so much better than SSR? What minute performance gain do they achieve over client-side rendering?
Why won't they invest more in solving the developer experience that took a nosedive when hooks were introduced? They finally added a compiler, but instead of going the Svelte route of handling the entire state, it only adds memoization?
If I could send a direct message to the React team, it would be: abandon all your current plans and work on allowing users to write native JS control flow in their component logic.
sorry for the rant.
I like to think of Server Components as componentized BFF ("backend for frontend") layer. Each piece of UI has some associated "API" with it (whether REST endpoints, GraphQL, RPC, or what have you). Server Components let you express the dependency between the "backend piece" and the "frontend piece" as an import, instead of as a `fetch` (client calling server) or a <script> (server calling client). You can still have an API layer of course, but this gives you a syntactical way to express that there's a piece of backend that prepares data for this piece of frontend.
This resolves tensions between evolving both sides: each piece of backend always prepares the exact data the corresponding piece of frontend needs, because they're literally bound by a function call (or rather JSX). This also lets you load data as granularly as you want without blocking (very nice when you have a low-latency data layer).
Of course you can still have a traditional REST API if you want. But you can also have UI-specific server computation in the middle. There's inherent tension between the data needed to display the UI (a view model) and the way the data is stored (database model); RSC gives you a place to put UI-specific logic that should execute on the backend but keeps composability benefits of components.
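A rough sketch of that idea in plain functions, without JSX and without React's real API; `getProfileViewModel` and `ProfileCard` are invented names that just show the shape of the dependency:

```javascript
// The "backend piece": prepares exactly the view model this UI needs.
async function getProfileViewModel(userId) {
  // Imagine a database call here; hardcoded for the sketch.
  const user = { id: userId, name: "Ada", passwordHash: "..." };
  // Only the fields the frontend needs cross the boundary.
  return { name: user.name };
}

// The "frontend piece": bound to the backend piece by a direct call,
// not by a fetch() to a separately maintained endpoint.
async function ProfileCard(props) {
  const vm = await getProfileViewModel(props.userId);
  return `<div class="profile">${vm.name}</div>`;
}

ProfileCard({ userId: 1 }).then((html) => console.log(html));
// <div class="profile">Ada</div>
```

The point of the sketch is that evolving the view model and the view together is one edit in one place, because the dependency is a function call rather than a wire contract.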
I understand the logic, but there are several issues I can think of.
1 - as I said, SSR and API layers are good enough, so investing heavily in RSC while the hooks developer experience is still so lacking seems weird to me. React always hailed itself as the "just JS" framework, but you can't actually write regular JS in components, since hooks have so many rules that bind the developer to a very specific way of writing code.
2 - as react was always celebrated as an unopinionated framework, RSC creates a deep coupling between 2 layers which were classically very far apart.
Here is a list of things I would rather have React provide:
- advanced form functionality that binds to model, and supports validation
- i18n: Angular compiles translations into the application, so fetching a massive JSON of translations is not needed
- signals, for proper reactive state
- better templating ability for control flows
- native animation library
All of these things are important, so I wouldn't have to learn each new project's permutation of the libraries du jour.
We'll see if the magic can be trusted or if we need more explicit solutions to this, but the Next/RSC experience was vastly superior to writing yet another REST API that will never be used by anything other than the accompanying React app, and I'd love to use it (or something similar) in the future.
The reason is probably that a REST API for a "BFF" is in many cases quite tightly coupled with the frontend, and trying to detach the two in the system architecture does not separate them in any higher scheme of things. Even if the two parts could be separated, they would never be used without one another, so the separation probably just creates an unnecessary barrier.
I really do want a good frontend framework that lets me expressively build and render dynamic frontend components, but it feels like 99% of React's development in the last few years has just been creating churn and making that core frontend experience worse and worse. Hooks solve challenges around sharing component meta-functionality but end up far worse for all other non-trivial cases, and it seems like RSC & concurrency just break things and add constraints instead of improving any part of my existing experience.
I guess this is cool if you're building mega-projects, but it makes React actively painful to use for anything smaller. I still use it every day, but as soon as I find a good off-ramp for my product (something similar, but simpler) I will take it. Moving towards Preact & signals currently seems like the best option for existing projects so far as I can tell.
I agree that the developer experience provided by the compiler model used in Svelte and React is much nicer to work with
And what DO they add? Only things that improve the dev experience.
IMO, a big part of it is the lack of competition (in approach) exacerbated by the inability to provide alternatives due to technical/syntactical limitations of JavaScript itself.
Vue, Svelte, Angular, Ripple - anything other than React-y JSX based frameworks require custom compilers, custom file-types and custom LSPs/extensions to work with.
React/JSX frameworks have preferential treatment with pre-processors essentially baking in a crude compile time macro for JSX transformations.
Rust solved this by having a macro system that facilitated language expansion without external pre-processors - e.g. Yew and Leptos implement Vue-like and React-like patterns, including support for JSX and HTML templating natively inside standard .rs files, with standard testing tools and standard LSP support:
https://github.com/leptos-rs/leptos/blob/main/examples/count...
https://github.com/yewstack/yew/blob/master/examples/counter...
So either the ECMAScript folks figure out a way to have standardized runtime & compilable userland language extensions (e.g. macros) or WASM paves the way for languages better suited to the task to take over.
Neither of these is likely, however, so the web world seems destined to remain unergonomic, overly complex and slow, at least for the next 5-10 years.
In my opinion, the core functionality of React (view rendering) is actually good and is why it cannot be unseated.
I remember looking for a DOM library:
- dojo: not for me
- prototype.js: not for me
- MooTools: not for me
- jQuery: something I liked finally
Well, guess what library won. After I adopted jQuery, I completely stopped looking for other DOM libraries.
But I still needed a template rendering library:
- Mustache.js: not for me
- Handlebars.js: not for me
- Embedded JavaScript Templates: not for me
- XML with XSLT: not for me
- AngularJS: really disliked it SOO much*
- Knockout.js: not for me
- Backbone.js with template engine: not for me and actually it was getting popular and I really wished it would just go away at the time**
- React: something I actually liked
You must remember that when React came out, you needed a JSX transpiler too, at a time when few people even used transpilers. This was a far bigger obstacle than these days IMO.
Which leads to my hot take: core React is just really good. I really like writing core React/JSX code and I think most people do too. If someone wrote a better React, I don’t think the problem you mentioned would hamper adoption.
The problems come when you leave React's core competency. Its state management has never been great. Although Redux is not a React project itself, I hated it (from just reading its docs). I think RSC at the current moment is a disaster: so many pain points.
I think that’s where we are going to see the next innovation. I don’t think anyone is going to unseat React or JSX itself for rendering templates. No one unseated jQuery for DOM manipulation — rather we just moved entirely away from DOM manipulation.
*I spent 30 minutes learning AngularJS and then decided “I’m never going to want to see this library again.” Lo and behold they abandoned their entire approach and rewrote Angular for v2 so I guess I was right.
**It went away and thankfully I avoided having to ever learn Backbone.js.
RSC is their solution to not being able to figure out how to make SSR faster and an attempt to reduce client-side bloat (which also failed)
https://www.arrow-js.com/docs/
It makes it easy to have a central JSON-like state object representing what's on the page, then have components watch that for changes and re-render. That avoids the opaqueness of Redux and promise chains, which can be difficult to examine and debug (unless we add browser extensions for that stuff, which feels like a code smell).
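For readers who haven't seen the pattern, here is a tiny Proxy-based sketch of that central-state idea; this is not ArrowJS's API, just an illustration of watch-and-re-render on a shared state object:

```javascript
// Minimal sketch: one central state object, components subscribe
// and "re-render" whenever any property changes.
function reactive(initial) {
  const watchers = new Set();
  const state = new Proxy(initial, {
    set(target, key, value) {
      target[key] = value;
      watchers.forEach((fn) => fn(target)); // notify on every write
      return true;
    },
  });
  return { state, watch: (fn) => watchers.add(fn) };
}

const { state, watch } = reactive({ count: 0 });
const renders = [];
watch((s) => renders.push(`count is ${s.count}`)); // stand-in "component"

state.count = 1;
state.count = 2;
console.log(renders); // [ 'count is 1', 'count is 2' ]
```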
I've also heard good things about Astro, which can wrap components written in other frameworks (like React) so that a total rewrite can be avoided:
https://docs.astro.build/en/guides/imports/
I'm way outside my wheelhouse on this as a backend developer, so if anyone knows the actual names of the frameworks I'm trying to remember (hah), please let us know.
IMHO React creates far more problems than it solves:
- Virtual DOM: just use Facebook's vast budget to fix the browser's DOM so it renders 1000 fps using the GPU, memoization, caching, etc and then add the HTML parsing cruft over that
- Redux: doesn't actually solve state transfer between backend and frontend like, say, Firebase
- JSX: do we really need this when Javascript has template literals now?
- Routing: so much work to make permalinks when file-based URLs already worked fine 30 years ago and the browser was the V in MVC
- Components: steep learning curve (but why?) and they didn't even bother to implement hooks for class components, instead putting that work onto users, and don't tell us that's hard when packages like react-universal-hooks and react-hookable-component do it
- Endless browser console warnings about render changing state and other errata: just design a unidirectional data flow that detects infinite loops so that this scenario isn't possible
I'll just stop there. The more I learn about React, the less I like it. That's one of the primary ways I know there's no there there when learning new tools. I had the same experience with the magic of convention over configuration in Ruby.
What's really going on here, and what I would like to work on if I ever win the internet lottery (unlikely now with the arrival of AI, since app sales will soon plummet along with website traffic), is a distributed logic flow. In other words, a framework where developers write a single thread of execution that doesn't care whether it's running on the backend or the frontend, and that handles all state synchronization, preferably favoring a deterministic fork/join runtime like Go's over async behavior with promise chains. It would work a bit like a conflict-free replicated data type (CRDT) or software transactional memory (STM), but with full atomicity/consistency/isolation/durability (ACID) compliance. Then we could finally get back to writing what looks like backend code in Node.js, PHP/Laravel, whatever, but have it run in the browser too, so that users can lose their internet connection and merge conflicts "just work" when they go back online.
Somewhat ironically, I thought that was how Node.js worked before I learned it, where maybe we could wrap portions of the code to have @backend {} or @frontend {} annotations that told it where to run. I never dreamed that it would go through so much handwaving to even allow module imports in the browser!
But instead, it seems that framework maintainers that reached any level of success just pulled up the ladder behind them, doing little or nothing to advance the status quo. Never donating to groups working from first principles. Never rocking the boat by criticizing established norms. Just joining all of the other yes men to spread that gospel of "I've got mine" to the highest financial and political levels.
So much of this feels like sending developers to the ends of the earth to cater to the runtime, to the point that I question whether it's even programming anymore. It would be like having people write low-level RTF codewords in MS Word rather than just typing documents via WYSIWYG. We seem to have all lost our collective minds... the emperor has no clothes.
I'm not sure what this is a reference to? Is it actually about Rails?
yea? JSX is much more than templating.
Firebase in this context is just a database and how you poll data on client or server from it. Nonsensical reference again.
It works out because it keeps a workforce of React developers on their feet, learning about the new features, rather than doing other stuff. It's like SaaS for developers, only instead of paying a monthly subscription in cash, you have to pay a monthly subscription in man-hours.
https://github.com/facebook/react/commit/bbed0b0ee64b89353a4...
and it looks like it's been squashed with some other stuff to hide it, or maybe there are other problems as well.
This pattern appears 4 times and looks like it restricts the exposed functions to the 'whitelist'. I presume the modules have dangerous functions in the prototype chain and clients were able to invoke them.
- return moduleExports[metadata.name];
+ if (hasOwnProperty.call(moduleExports, metadata.name)) {
+ return moduleExports[metadata.name];
+ }
+ return (undefined: any);
https://vercel.com/changelog/cve-2025-55182
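As a side note on the patch itself, here is a sketch (my own example, not from the React codebase) of why checks like this call `hasOwnProperty` via `Object.prototype` rather than as a method on the object:

```javascript
// Objects with a null prototype don't have the method at all:
const plain = Object.create(null);
plain.run = () => "ok";
// plain.hasOwnProperty("run") would throw a TypeError here.
console.log(Object.prototype.hasOwnProperty.call(plain, "run")); // true

// And an attacker-influenced object could shadow the method outright:
const shadowed = { hasOwnProperty: () => true }; // lies about everything
console.log(shadowed.hasOwnProperty("constructor")); // true (wrong)
console.log(Object.prototype.hasOwnProperty.call(shadowed, "constructor")); // false
```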
> Cloudflare WAF proactively protects against React vulnerability
We still strongly recommend everyone to upgrade their Next, React, and other React meta-frameworks (peer)dependencies immediately.
I genuinely believe Next.js is a great framework, but as a European developer working on software that should not touch anything related to the CLOUD Act, you're just telling me that Next.js and React, despite being OSS, are not made for me anymore.
and Deno Deploy/Subhosting: https://deno.com/blog/react-server-functions-rce
I think their time as JavaScript thought leaders has passed.
I’m interested in learning more about the history here.
I believe one of the React email services got pwned because they leaked sensitive info via RSC, and there was a whole fiasco around Next.js encrypting server secrets and sending them to the client.
Lo and behold just a couple years later, a lvl 10 RCE because of the complexity of their RPC approach coupled with the blurring of the lines between client/server...it's not like it's surprising to us. A repro of the vulnerability is on X & Github if you want to search for it, it's a classic deserialization bug that only exists because their format is so complex (and powerful).
Remember a lot of us use React as a UI library and to see it causing our servers to get pwned is what people were uneasy about when they announced RSC.
Unfortunately much of this discussion is on X, which makes it hard to find, especially because I think Dan Abramov deleted his X account.
> Experimental React Flight bindings for DOM using Webpack.
> Use it at your own risk.
311,955 weekly downloads though :-|
While scores are a good way to bring this stuff to people's attention, I wouldn't use them to enforce business processes. There's a good chance your code isn't even affected by this CVE even if your security scanners all go full red alert on this bug.
I guess now we'll see more bots scanning websites for "/_next" path rather than "/wp-content".
However, if I am reading this correctly, your PoC falls in the category described here: https://react2shell.com/
> Anything that requires the developer to have explicitly exposed dangerous functionality to the client is not a valid PoC. Common examples we've seen in supposed "PoCs" are vm#runInThisContext, child_process#exec, and fs#writeFile.
> This would only be exploitable if you had consciously chosen to let clients invoke these, which would be dangerous no matter what. The genuine vulnerability does not have this constraint. In Next.js, the list of server functions is managed for you, and does not contain these.
Context: This is from Lachlan Davidson, the reporter of the vulnerability
So I don't think this mechanism is exactly correct. Can you demo it with an actual Next.js project, instead of your mock server?
1. npm start
2. npm run exploit
I'm debugging it currently, maybe I'm not on the right path after all.
Delete this distraction to genuine blue teamers and stop shitting up the information landscape with this utter hogwash.
This is why infosec is dead.
https://github.com/ejpir/CVE-2025-55182-poc/issues/1#issueco...
Mind you, React in 2017 paid my rent. Now, because of the complexity, I refuse to work with React.
What do you like to work with now?
It's totally fine to say you don't understand why they have benefits, but it really irks me when people exclaim they have no value or exist just for complexity's sake. There's no system for web development that provides the developer with more grounded flexibility than RSCs. I wrote a blog post about this[0].
To answer your question, htmx solves this by leaning on the server immensely. It doesn't provide a complete client-side framework when you need it. RSCs allow both the server and the client to co-exist, simply composing between the two while maintaining the full power of each.
[0] https://saewitz.com/server-components-give-you-optionality
The criticism is that by allowing you to do something you shouldn't, there isn't any benefit to be had, even if that system allows you to do something you couldn't before.
I mean, it's a lot of complexity, but ideally you shouldn't bring it in unless you actually need it. These solutions do solve real problems; the only issue is that people try to use them everywhere. I don't use RSC; standard SPAs are fine for my projects and simpler.
> Even in systems that prevent server functions from being declared in client code (such as "use server" in React Server Components), experienced developers can be caught out. We prefer a design that emphasises the public nature of remote functions rather than the fact that they run on the server, and avoids any confusion around lexical scope. [0]
> In remote procedure call systems, client-side stub code must be generated and linked into a client before a remote procedure call can be done. This code may be either statically linked into the client or linked in at run-time via dynamic linking with libraries available locally or over a network file system. In either the case of static or dynamic linking, the specific code to handle an RPC must be available to the client machine in compiled form... Dynamic stub loading is used only when code for a needed stub is not already available. The argument and return types specified in the remote interfaces are made available using the same mechanism. Loading arbitrary classes into clients or servers presents a potential security problem;
https://pdos.csail.mit.edu/archive/6.824-2009/papers/waldo-r...
First, "same code on the client and the server" is wrong. Since when do RSCs run on both the client and the server?
Also, you honestly believe that wanting to use the same language across paradigms is a "coding bootcamp" thing lol? That something like Blazor was born out of a coding bootcamp?
Have you ever built a web app? Both front and back end? Have you ever had to deal with the tension of duplicating code? Models, validation, ideas?
If you answered yes to those questions but still don't see how de-duplicating code like that can be important, then I'm 100% positive you're still in the boot camp.
Those who don't learn history are bound to repeat it, and all that.
To be fair, they also haven't released (even experimental) RSC support yet, so maybe they lucked out on timing here.
I have built electron apps for fun that utilize React Server Components entirely offline.
Basically you're technically correct with your quip, but engaging in some pretty awful security analysis. IMHO most people reading this headline are not going to understand that they need to audit their server dependencies.
With SSR, those round trips to the server could be down to single-digit milliseconds assuming your frontend server is in the same datacenter as your backend. Plus you send HTML that has actual content to be rendered right away.
A truly functional pageload can go from seconds to milliseconds, and you're transferring less data over the wire. Better all around at the expense of running a React Server instead of a static file host.
SSR can be a game-changer in domains like e-commerce. But completely useless for some other use case.
RSC's advantages are a bit more complex to explain, because even a simple portfolio website would benefit from it. Contrary to the common belief held by long-term ReactJS devs, RSC simplifies a lot of the logic. Adapting existing code to RSC can be quite a mess, though, and RSC is a big change of mindset for anybody used to ReactJS.
Those who are choosing JS for the backend are irresponsible stewards of their customers' data.
Even if that's true, it is irrelevant.
- You need to decide on a package manager, and everyone has their favorite: npm, yarn, bun, pnpm ...
- You need to depend on npmjs.com for dependencies, which has an unusually high number of malicious packages compared to other dependency sources.
- You need to use some framework like Next.js, which itself is a cesspool of backward-incompatible changes, combined with outrageous security issues