For starters, the DOM is a declarative thing. Immediate mode means a function is called every frame, and inside it the user of the GUI toolkit makes imperative calls to render each element of the GUI, every time (unlike retained mode, where a framework hides that behind some object model). It's hard to get more opposite to a declarative approach.
It suggests a particular implementation, but in practice most nontrivial "immediate mode" GUI libraries (including egui [1] and the famous Dear ImGui [2][3]) retain some "shadow state" between frames. The existence or scope of that state is a (sometimes-leaky) implementation detail that shouldn't distract from the fact that the API presented is still "immediate mode."
[1] https://github.com/emilk/egui#ids
[2] https://github.com/ocornut/imgui/blob/master/docs/FAQ.md#q-a...
[3] https://github.com/ocornut/imgui/wiki/About-the-IMGUI-paradi...
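To make the "shadow state" point concrete, here's a toy sketch of that pattern (all names invented for illustration; this is not the actual egui or Dear ImGui API): the user re-issues draw calls every frame, while the library quietly retains per-widget state keyed by ID between frames.

```typescript
// Toy immediate-mode library: every identifier here is made up for illustration.
type DrawCommand = string;

const commands: DrawCommand[] = [];
// "Shadow state" the library retains between frames, keyed by widget ID.
const buttonState = new Map<string, { wasPressed: boolean }>();

let mouseClickedOn: string | null = null; // simulated input for this frame

// The user calls this every frame; it returns true when the button was clicked.
function button(id: string, label: string): boolean {
  const state = buttonState.get(id) ?? { wasPressed: false };
  const clicked = mouseClickedOn === id;
  state.wasPressed = clicked;
  buttonState.set(id, state); // retained across frames, invisible to the caller
  commands.push(`draw button "${label}"`);
  return clicked;
}

// User code: a plain function, re-run for every frame.
let count = 0;
function gui(): void {
  commands.length = 0; // the command list is rebuilt from scratch each frame
  if (button("inc", `count = ${count}`)) {
    count += 1;
  }
}

// Frame 1: no input. Frame 2: a click on the button. Frame 3: no input.
gui();
mouseClickedOn = "inc";
gui();
mouseClickedOn = null;
gui();
console.log(count); // after the click, count is 1
```

The caller never sees `buttonState`; from their perspective the API is purely "call draw functions every frame," which is exactly the point about the retained state being an implementation detail.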
In React's case, you describe the GUI declaratively (your JSX component tree), with some procedural stuff thrown in (the JS parts of the JSX). In classical immediate mode, you describe it by calling paint functions. But in both cases you describe it every time, as opposed to performing actions on a pre-created widget graph. With the caveat that in React's case there is a widget graph behind the scenes: the DOM. But that's an implementation detail; as far as the dev writing React is concerned, they re-describe every GUI state.
So, you can say that React is an "immediate mode" abstraction over the retained DOM. In fact that's exactly what devs say about it:
https://twitter.com/ibdknox/status/413363120862535680
That's the inspiration React took from "immediate mode": the GUI intent is expressed by describing a new state, as opposed to manipulating existing state. And the diffing algorithm is also inspired by the diffing algorithms immediate mode renderers use to draw less on each frame.
That React does this with DOM widgets under the hood rather than painting commands, and that those are higher-level widgets and not lines and rectangles, is not the part of the analogy people emphasize.
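The diffing idea can be sketched in a few lines: compare the previous description of the UI with the new one, and emit only the mutations needed. (This is illustrative only, not React's actual reconciler; the flat keyed list and the operation strings are invented for the example.)

```typescript
// Illustrative diff between two flat "virtual" descriptions, keyed by id.
// Not React's reconciler; just the shape of the idea.
type VNode = { id: string; text: string };

function diff(prev: VNode[], next: VNode[]): string[] {
  const ops: string[] = [];
  const prevById = new Map(prev.map((n) => [n.id, n]));
  for (const n of next) {
    const old = prevById.get(n.id);
    if (!old) {
      ops.push(`create ${n.id} "${n.text}"`);
    } else if (old.text !== n.text) {
      ops.push(`update ${n.id} -> "${n.text}"`);
    }
    prevById.delete(n.id);
  }
  for (const id of prevById.keys()) {
    ops.push(`remove ${id}`); // in prev but not in next
  }
  return ops;
}

// The user "re-describes everything"; the library touches only what changed.
const frame1: VNode[] = [
  { id: "title", text: "Count: 0" },
  { id: "btn", text: "Increment" },
];
const frame2: VNode[] = [
  { id: "title", text: "Count: 1" },
  { id: "btn", text: "Increment" },
];

console.log(diff(frame1, frame2)); // only the title changed
```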
retained: the user instantiates widgets, then changes their state (e.g. to clicked).
traditional immediate: for every 'tick', user issues commands to draw the new GUI state, usually with some smart diffing under the hood to minimize paint commands.
React immediate: for every tick, user describes the arrangement of widgets, text, etc for the next GUI state, usually with some smart diffing under the hood to minimize DOM changes.
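The three styles above, reduced to toy code (every API here is invented for illustration, not taken from any real toolkit):

```typescript
// 1. Retained: instantiate once, then mutate the live widget object.
class RetainedLabel {
  constructor(public text: string) {}
  setText(t: string) { this.text = t; } // the framework repaints for you
}
const label = new RetainedLabel("Count: 0");
label.setText("Count: 1"); // imperative change to a pre-created widget

// 2. Traditional immediate: re-issue draw commands every tick.
function drawFrameImmediate(count: number): string[] {
  return [`drawText "Count: ${count}"`, `drawButton "Increment"`];
}

// 3. React-style: re-describe the arrangement of widgets every tick;
//    diffing against the previous description yields minimal DOM changes.
function describeFrame(count: number) {
  return [
    { id: "title", text: `Count: ${count}` },
    { id: "btn", text: "Increment" },
  ];
}

console.log(label.text);               // "Count: 1"
console.log(drawFrameImmediate(1)[0]); // 'drawText "Count: 1"'
console.log(describeFrame(1)[0].text); // "Count: 1"
```

In styles 2 and 3 the user's code is re-run for every tick and says everything about the new state; only in style 1 does the user hold onto a widget and poke at it.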
Calling a "draw button" function, as opposed to declaring a "<Button>" in a JSX structure, is not that crucial of a difference, compared to the conceptual change between "manipulating objects" and "telling everything about how the UI should look at each tick".
Heck, a dev doing an "actual" immediate mode could trivially wrap the "draw button" function call to be driven from a declarative file (if you parse the term "button": call draw button, etc). It still wouldn't be retained mode.
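That wrapping really is trivial. A sketch, with a made-up mini-format (the "tag text" lines and the draw functions are all invented for the example): parse a declarative description and dispatch to draw calls each tick. Nothing is retained between frames, so it's still immediate mode.

```typescript
// Made-up declarative format: one "tag text" pair per line.
const uiSpec = `
button Increment
label Hello
`;

// The imperative immediate-mode calls being wrapped (toy stand-ins).
const drawCalls: string[] = [];
const drawButton = (label: string) => drawCalls.push(`drawButton(${label})`);
const drawLabel = (text: string) => drawCalls.push(`drawLabel(${text})`);

// Drive the draw calls from the declarative description, once per tick.
function renderTick(spec: string): void {
  drawCalls.length = 0; // nothing retained: rebuilt from scratch each tick
  for (const line of spec.trim().split("\n")) {
    const [tag, ...rest] = line.trim().split(" ");
    if (tag === "button") drawButton(rest.join(" "));
    else if (tag === "label") drawLabel(rest.join(" "));
  }
}

renderTick(uiSpec);
console.log(drawCalls); // ["drawButton(Increment)", "drawLabel(Hello)"]
```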
And React and co also sparked new interest in immediate mode GUIs, inspiring some new, bona fide immediate mode libraries, with graphics calls and everything.
React is clearly a different thing to immediate mode and retained mode. You don't have to cram everything into those two categories. We could do with a word for it though.
I don't think the guy you linked is a React Dev.
The terms, yes. But the description and understanding of React (as an immediate mode wrapper over the DOM) is shared across the community, all the way to Wikipedia:
"One way to have the flexibility and composability of an immediate mode GUI without the disadvantages of keeping the widget tree only in function calls, with the lack of direct control of how the GUI is drawn in the rendering engine would be to use a virtual widget tree, just like React uses a virtual DOM".
What you're arguing is little details about "how things were always done in immediate land," as if this was set in stone.
Well, React, and declarative UI descriptions such as SwiftUI, have also shown that you can apply the immediate UI concept to something other than direct drawing calls: namely, controlling a retained mode UI underneath with the same logic you'd use to call paint functions (or abstractions over them like "drawButton" and such).
It's not that you're wrong. It's just that you're right about the trees, not the forest. It's the concept that matters, not the implementation details -- which is like complaining that "Linux can't be UNIX, it doesn't derive from ancient blessed code". Yeah, but it's still UNIX to everybody - and in fact today's de facto UNIX.
>I don't think the guy you linked is a React Dev.
I didn't say he was. In fact it says right there on the tweet that he's doing Aurora, and that in doing so he was inspired by React and its immediate mode approach (he's the guy behind LightTable/Aurora/Eve).