"Here's an optimization for when the X display is the same machine as the process."
"We've built Wayland. We all run applications on the same machine as the display and X gets in the way."
"We'd like to run applications from machines over the internet."
"We've built Wprs. It allows you to run graphical applications over a network."
That's the circle of life.....
In the simple case of the past, it made sense to send the primitives. In the complex now it's way more robust to stuff the bitmap over the wire.
Much of the reason RDP and WebGL work so well is that they send the raw command stream/assets to the rendering side rather than continuously sending a stream of compressed framebuffer data. RDP, for example, can send the raw video stream being played (or recompress it and send that) rather than rendering it to the screen and then recompressing the resulting output, which is what you get with something like VNC. That's why it's completely possible to stream video over RDP on links that choke up pretty much everything else.
For an incredibly wide range of things, high-level GUI command streams are going to be significantly less data-intensive than the resulting rendered and compressed video stream. AKA, the GUI command stream is itself an application-specific compression algorithm. A draw-line command can affect tens of thousands of pixels around the line due to antialiasing, and that will never compress as well as the drawing command (x, y, brush). So while sending a bunch of textures, shaders, and the like might be a huge initial overhead, it's going to quickly pay for itself over the lifetime of something like a game, particularly at high resolutions and refresh rates.
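A back-of-the-envelope sketch of that claim. All the sizes below are illustrative assumptions (a hypothetical 20-byte line command, a 3-pixel antialiasing band), not any real protocol's wire format:

```python
# Back-of-the-envelope: a "draw line" command vs. the raw pixels it touches.
# All sizes are illustrative assumptions, not a real protocol's wire format.

# Hypothetical command: two endpoints (4 x 4-byte ints) plus a brush id.
command_bytes = 4 * 4 + 4  # 20 bytes

# Pixels touched: a 1000-pixel line whose antialiasing bleeds into a
# ~3-pixel-wide band, at 4 bytes per RGBA pixel.
line_length_px = 1000
band_width_px = 3
pixel_bytes = line_length_px * band_width_px * 4  # 12,000 bytes

print(f"command: {command_bytes} B, raw pixels touched: {pixel_bytes} B")
print(f"ratio: {pixel_bytes // command_bytes}x")
```

Even before compression on either side, the command is orders of magnitude smaller than the region it invalidates, and the gap only widens at higher resolutions.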
Never mind, of course, the overhead of doing a bunch of local rendering, compressing it, sending it over the wire, and doing video playback. If it weren't for hardware video encoding/decoding, it would essentially be a case of pegged CPUs on both sides just doing graphical updates.
This may just be one of the issues with Wayland compositing; over time, I've become more convinced that the loss of standard GUI widgets and a serializable drawing stream was a huge mistake.
this sounds scammy
X earned its place in the Unix-Haters Handbook over 30 years ago; at least let's not pretend this was some universally beloved technology either.
Are there? I don't think anyone seriously thinks X is the way forward do they? They just don't like some of Wayland's poor choices, which I think is fair.
Local-first is the correct approach, as end-user computers are significantly more powerful than 30 years ago. Plus, we can transport bitmaps far more efficiently with compression, so it's quite obviously an improvement on every metric; it's pretty shortsighted to write it off as circular development.
While true for things like Firefox and GUI libs that think they know better than the host, it's not ideal, and it's entirely dependent on the language/toolkit being used. It's a side effect of X being too low-level and not having a standard widget toolkit, as one gets on Windows/Mac. On those platforms it's not just the widgets: file-open dialogs are provided by the OS and skinned consistently across applications, and things like global keyboard shortcuts (e.g., copy/paste) tend to work close to 100% of the time.
If one goes back to the Win3.x/MacOS 7 timeframe, it was almost unheard of for applications not to follow system-wide color themes. Which is why "dark mode" in 1995 worked more reliably than it does on any platform today. Now, something like inverting to a "dark" theme is suddenly something each application hacks in on its own, and when the OS does provide a theme option, applications frequently don't follow it correctly.
All this could have been avoided, and yet here we are…
Running software locally has been a requirement for at least a few years by my calendar, yet I don't see you complaining about their network-first-patch-local-after approach.
Also, I'd like to plug Sunshine - I think it works on Wayland, and from the tests I did it does a fantastic job on a local network of making a desktop usable, even from an Android device. The downside is that whatever happens on the remote screen must also happen on the host screen, so you can't, for example, keep the host locked while you game on it.
That doesn't mean spice is being deprecated.
EDIT: just an FYI for everyone interested, here's a bit of info about how Proxmox is handling it. TL;DR Spice is still at least maintained until 2029, so it's not going to just go away anytime soon.
Gives you an X server that you can connect to with Spice. It's bare-bones (no audio, etc.), but looks usable.
Every other Linux remote desktop solution I tried was a mess, including Xpra (which was quite involved to even get installed on Debian, and then a letdown, yikes).
I wonder what happens with Wprs when you try to launch virt-manager, which requires root.
Microsoft has similar functionality but it's gated behind enterprise licenses. I ran Xpra for free when I was a kid and it was magical; many years later I've still never played with Windows' Terminal Services, which is the MS thing that can do rootless IIRC. The way MS gates the necessary windowing features is why to this day you can't do rootless with X2Go or Xpra with normal Windows clients.
Is there a special reason the usual GUI PolKit privilege-elevation wrappers can't work with a remote X session? I think I used to be able to use kdesu and similar just fine over Xpra. Note that "rootless" in this context (windowing) means "without a root window", i.e., without rendering a desktop. It has nothing to do with the usage in projects like podman, where "rootless" means "not requiring elevated privileges".
Likewise, I think the overwhelming majority of tiling window managers are very keyboard-oriented, so if you use the same or a similar one in a remote rooted VNC session, there will be conflicts between local and remote keybindings unless you're really careful about it. Better to just let the local WM manage everything.
E.g. (1) open a session on the desktop with Firefox, Thunderbird, and VS Code. (2) Then grab the laptop, go to a coffee shop, and connect with wayland-transpositor to the desktop to resume the session with those programs where you left off. (3) Then go home and resume from the desktop.
I guess for (1) and (3) you are just running both the server and client on the desktop, and then for (2) you close the client on the desktop and run it on the laptop?
It would be cool if it could do shadowing, or have multiple clients running at the same time.
I did this a whole lot in the lab with Windows - walk away, and log in remotely and monitor my session, then head back in and log back on locally.
I never found a way to do this with Linux that just restores the session the way Windows seems able to. Instead all I ever found was "the remote screen is unlocked and the cursor is moving" or whatever.
Maybe you could do something with DPMS, but that still leaves the keyboard active.
If you start the application in a local wprsc connected to a local wprsd, instead of in the local compositor directly, you can later connect to the session with a remote wprsc.
Locking the screen (the other subthread in here) doesn't apply to this: applications are drawn wherever the wprsc is running, and only one wprsc can be connected at a time.
High refresh rates and low latency are non-trivial problems because they require realtime video encoding, which is less bandwidth-efficient than offline encoding. That means you need a network transport that can shovel high-bandwidth data at low latency and gracefully handle packet loss; if you do it over the open internet, you also need to adapt to variable bandwidth. All of these things have been done, but someone would still need to tie them together in an open-source solution. There's the Sunshine/Moonlight combo for remote game streaming, but it lacks other features for remote desktop use.
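Rough arithmetic on why this is hard. The numbers below are illustrative assumptions (RGBA framebuffer, a ~50:1 realtime compression ratio), just to show the scale involved:

```python
# Rough bandwidth arithmetic for remote desktop streaming.
# All numbers are illustrative assumptions.

width, height, fps = 1920, 1080, 60
bytes_per_pixel = 4  # RGBA

# Uncompressed framebuffer stream, in bits per second.
raw_bps = width * height * bytes_per_pixel * 8 * fps
print(f"uncompressed 1080p60: {raw_bps / 1e9:.1f} Gbit/s")

# A realtime encoder might manage ~50:1 at acceptable quality; an offline
# multi-pass encode of the same content could do much better, but
# low-latency modes give up lookahead and B-frames.
realtime_ratio = 50
print(f"realtime-encoded: {raw_bps / realtime_ratio / 1e6:.0f} Mbit/s")
```

Tens of megabits per second, delivered with consistently low latency and graceful loss handling, is exactly the transport problem the comment describes.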
https://github.com/wayland-transpositor/wprs#comparison-to-w...
x->w, ra->rs (i.e., xpra -> wprs)
Maybe a dumb question but what rendering method does Wprs use?