For games "prediction" is an apt description of this, because the client can make a good guess as to the result of their input, but can't know for sure since they don't know the other players' inputs. But this paradigm can also be used to simply respond immediately to client input while waiting for the official server state - say by enabling/disabling a dropdown, or showing a loading spinner.
There's also plenty of client state that's not run on the server at all. Particle systems, ragdolls - stuff that doesn't need to be exactly the same on all clients and doesn't interact with other player inputs / physics.
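The predict-and-reconcile loop described above can be sketched in a few lines. This is a minimal illustration, not any particular engine's API; all the names (`makePredictor`, `applyInput`, `reconcile`) are made up for the example:

```javascript
// Minimal sketch of client-side prediction with server reconciliation.
// The client applies inputs locally right away, and when the
// authoritative server state arrives it re-applies any inputs the
// server hasn't seen yet on top of it.
function makePredictor(initialState, applyInput) {
  let confirmed = initialState; // last state acknowledged by the server
  let pending = [];             // inputs sent but not yet confirmed

  return {
    // Apply an input locally immediately and queue it for the server.
    predict(input) {
      pending.push(input);
      return pending.reduce(applyInput, confirmed);
    },
    // When the authoritative state arrives, drop the inputs it already
    // includes and re-apply the rest on top of it.
    reconcile(serverState, ackedCount) {
      confirmed = serverState;
      pending = pending.slice(ackedCount);
      return pending.reduce(confirmed === null ? applyInput : applyInput, confirmed);
    },
  };
}
```

The same shape works for the dropdown/spinner case: the "prediction" is just the UI change you expect, and `reconcile` snaps back to whatever the server says.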
If we're gonna have a persistent server connection I don't see a reason this wouldn't work in a reactive paradigm.
It's a clone of https://generals.io/
It's built with LiveSvelte. It doesn't need any predictive features as it's basically a board game and runs at 2 ticks per second. It does use optimistic updates to show game state before it actually updates on the server. The server overrides the game state if they're not in sync.
All game logic is done inside Elixir. To do prediction correctly, you'd need to share logic between the server and the client. Otherwise you're writing your logic twice, and that's just a recipe for disaster.
One possible solution which I didn't investigate, but should work, is to write all the game logic in Gleam (https://gleam.run/). Gleam is compatible with Elixir, AND it can also compile to JS, so you could in theory run the same code on the server and the client.
Now this is a big mess to understand. You could say "why not write it all in JS and be done with it?" and you'd make a very good point, and I'd probably agree. The main advantage you get is that you can use the BEAM and all its very nice features, especially when it comes to real-time distributed systems. It's perfect for multiplayer games, as long as you don't take the frontend into account :)
1) Spawning closer to the center is just strictly worse; corners are best. It's essentially an auto-loss to spawn anywhere but an edge or corner in an FFA.
2) Getting literally all of someone's armies when you kill them is so good that it pushes players to just all-in the first person they meet every single time. There is no way to "fast expand" once you've met another player because of the first factor, and also because 40-50 armies for a neutral castle is a very high cost. I think the early game can be much better designed.
3) Perhaps you should be able to change your capital, which could solve problem 1.
4) There are many ways you could keep the simplicity of play, but boost the richness and depth of the actual gameplay. For instance, more chokepoints that would allow actual strategic use of territory and army positioning. Different tiles which have different advantages to owning. Borrowing from something like civ - forests or hills which have a defensive boost when you're inside. Rivers which attacking across is disadvantageous.
Just some feedback from someone who enjoys games like Chess and Starcraft, and thinks the core gameplay loop of generals.io is really fun, but believes that it is seriously lacking in strategic depth.
Rust, with Uniffi, can also be a good candidate. You’d be targeting WebAssembly.
What ends up happening is someone dies and their body flies off or gets stuck in a hilarious pose... and no one else saw it, nor can you rewatch it in the replays as every client renders it differently.
Unless you catch it with a live recording, it's lost forever. With a sometimes-goofy game like Overwatch, it's sad knowing no one else is seeing it.
https://arstechnica.com/gaming/2019/10/explaining-how-fighti... is an incredible walkthrough of this! Discussion: https://news.ycombinator.com/item?id=34399790 and https://news.ycombinator.com/item?id=26289933
When it comes to server persistence, as in an MMO setting, you add I/O bottlenecks into the mix. https://prdeving.wordpress.com/2023/09/29/mmo-architecture-s... is a fascinating read for that end of things. Discussion: https://news.ycombinator.com/item?id=37702632
I'm not a web expert, but the optimistic updates I've seen in web stuff are more like: I'm going to fetch this URL and here's the data I expect back. Nothing wrong with that, but it's achieved in a different way, where the server is all about providing data and the client is all about managing state.
The OP is talking about maintaining a persistent connection to a server which is doing most of the state management. They detail things this does well (makes the server easier to write) and things it does poorly (makes optimistic updates harder) and a solution to the things it does poorly. So I'm drawing a parallel to other systems where state is managed on the server and must be predicted on the client.
E.g. throttle your network and upvote an HN comment: you're not sitting there waiting with a spinner while the server responds, it all happens in the background.

The HN implementation isn't great, though: if the upvote request fails, the optimistic update isn't rolled back, and you get no indication that it failed. For HN, who cares, it's just a lost upvote, but in most modern web apps you would show the user that the action failed.
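The rollback variant is only a few lines. This is a hedged sketch, not HN's actual code; `sendUpvote` and `notify` stand in for whatever request and toast mechanism the app really uses:

```javascript
// Optimistic upvote with rollback on failure. The UI updates
// immediately; if the request fails, the count is restored and the
// user is told the action didn't go through.
async function upvote(comment, sendUpvote, notify) {
  comment.votes += 1; // optimistic: update the UI right away
  try {
    await sendUpvote(comment.id);
  } catch (err) {
    comment.votes -= 1; // roll the optimistic update back
    notify('Upvote failed, please retry');
  }
  return comment;
}
```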
(I gave the talk at ElixirConf 2022 on how to combine them, but the live_svelte contributors have done the work to make it a reality)
IMO there is always a need for client side state, especially for apps with rich UX. I also live in NYC where network connectivity is not a given, especially in transit.
One super powerful feature that the authors don't cover is being able to use Phoenix's pubsub, so that server-side state changes that occur on other servers also get pushed reactively to any client. It's pretty typical to have multiple web servers for handling mid/high levels of traffic.
LiveView isn't that special; the LiveView paradigm works best for interactions that would already have gone to the server on a normal page.
Don’t do this in line-of-business apps where those rows are interactive. The cognitive latency readily induces users into clicking the wrong thing, emailing the wrong customer, refunding the wrong transaction etc. My preferred UX instead is a sticky banner conveying “the data has changed, click here to refresh”. Or, in a pinch, ensure that new rows are rendered append-only with no change in scroll position.
As the article points out, there are some good use cases for deviating from the 'LiveView Way'. I would argue that if you have 1,000 ms round trips then there is something else to consider, but geographically distributed servers could be unavailable to your team for a number of reasons (i.e. cost), so adding some client-side state management could be your solution.
* States that the client needs to keep track of
* States that the server needs to keep track of
Then on top of those there are two more kinds of state that overlap, but they're not quite the same thing:

* States that only need to exist in memory (i.e. transient)

* States that need to persist between sessions
There's a seemingly infinite number of ways to manage these things, and because "there's no correct way to do anything in JavaScript" you either use a framework's chosen way to deal with them or you do it on an ad-hoc basis (aka "chaos" haha).

In the last sophisticated SPA I wrote, I had it perform a sync whenever the client loaded the page. Every local state or asset had a datetime-based hash associated with it, and if it didn't match what was on the server, the server would send down an updated version (of whatever that thing was, whether it be simple variables, a huge JSON object, or whole images/audio blobs).
Whenever the client did something that required a change in state on the server it would send an update of that state over a WebSocket (99% of the app was WebSocket stuff). I didn't use any sort of sophisticated framework or pattern: If I was writing the code and thought, "the server needs to keep track of this" I'd have it send a message to the server with the new state and it would be up to the server whether or not that state should be synchronized on page load.
IMHO, that's about as simple a mechanism as you can get for managing this sort of thing. WebSockets are a godsend for managing state.
In practice, applications need state on both the client and the server. The server needs the authoritative state information (since the client is untrusted), but the client needs to be able to re-render in response to user interaction without a round-trip.
Another thing I like about this is the ability to be able to use Svelte as a templating language rather than Heex.
The difference between LiveView and traditional technologies that render HTML server-side is a persistent connection to the server, which allows it to re-render templates in response to server-side events and patch client-side HTML without writing any JavaScript.
When your web application requires the following:
- Large amount of user interaction (requires client side JavaScript)
- Large amount of experimentation (bundle is not static, changes per request)
You are going to want to split the logic between the server and client to reduce the amount of JavaScript you're sending to the client. Otherwise you have a <N>MB JavaScript bundle of all the permutations your application could encounter. This may be fine for something like a web version of Photoshop, where the user understands that an initial load time is required. But for something like Stripe, Gmail, etc., where the user expects the application to load instantly, you want to reduce the initial latency of the page.
You can move everything to the server, but like GitHub experienced you then encounter a problem where user interaction on the page suffers because user actions which are expected to be instant, instead require a round trip to the server (speed of light and server distribution then becomes an issue).
You can lazy-load bundles with something like async import, but then you run into the waterfall problem, where the lazy-loaded JavaScript then needs to make a request to fetch its own data, and you're left making two sequential requests for everything.
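The waterfall and its usual fix can be shown side by side. This is a sketch with stand-in functions — `loadModule` represents a dynamic `import()` and `fetchData` represents the data layer:

```javascript
// The waterfall: the data request can't start until the module has
// finished loading, so the two requests run back to back.
async function loadWithWaterfall(loadModule, fetchData) {
  const mod = await loadModule(); // request 1
  const data = await fetchData(); // request 2 starts only now
  return mod.render(data);
}

// The fix: kick off the data fetch in parallel with the import so
// both requests are in flight at the same time.
async function loadInParallel(loadModule, fetchData) {
  const [mod, data] = await Promise.all([loadModule(), fetchData()]);
  return mod.render(data);
}
```

The catch, of course, is that something outside the lazy module now has to know what data it needs, which is exactly the logic-sharing problem the next paragraph describes.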
If you encounter all these problems then you end up reaching for solutions which make it easier to manage the complex problem of sharing logic/state between the client and the server.
- State coming from the server as a result of user actions
- Local state which isn't intended to be shared to the server, usually UI stuff.
It appears that's one of their major problems. They're not "building" APIs. They're just slapping "one shots" onto the side of a router whenever the need arises. This speaks to a complete lack of a planning and design phase.
I guess if you want to build something without any plan whatsoever, this might be a way to "improve" that process, but there's a much simpler one that doesn't require your team to become polarized over a framework.
If you run an identical query engine on client and server and then sync data between them, a client can just write a normal query and get an optimistic response instantly from a local cache, then later a response from the authoritative server. Frankly, this is where most highly interactive apps (like WhatsApp, Linear, Figma, etc.) end up anyway with bespoke solutions; we're just making it general purpose.
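The local-then-authoritative flow looks roughly like this. A hedged sketch, not any particular product's API; `fetchFromServer` and the result shape are invented for the example:

```javascript
// Run the query against the local cache for an instant optimistic
// answer, then emit the authoritative server result when it arrives.
function query(localCache, fetchFromServer, onResult) {
  // 1. Instant answer from the local replica.
  onResult({ source: 'local', rows: localCache });
  // 2. Authoritative answer later.
  return fetchFromServer().then((rows) => {
    onResult({ source: 'server', rows });
    return rows;
  });
}
```

The UI subscribes via `onResult` and simply re-renders each time, so it doesn't need to care which answer it's currently showing.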
But more stuff like this is always welcome!
The next step is to compile your server to WebAssembly and ship it to your clients. You can then use it to optimistically render responses while waiting for the real server to return.
Sounds a little crazy, but we've actually pulled it off for a project, and it's magic.
I've built a turn-based game that worked this way, where essentially every player and the server holds a copy of the same state machine, and the server determines the order of state updates. Like OP said, once you have the framework in place, it's magic.
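The core idea — a deterministic state machine replicated on every peer, with the server fixing the event order — can be sketched in a few lines. Names here are illustrative, not from the commenter's game:

```javascript
// Every peer runs the same deterministic reducer. As long as each
// replica applies the server-ordered event log in the same order,
// all replicas converge on the same state.
function makeReplica(initialState, reduce) {
  let state = initialState;
  return {
    apply(event) { state = reduce(state, event); },
    state: () => state,
  };
}

// Replaying the same ordered log on any replica yields the same state,
// which is what makes optimistic local application safe: a client can
// run ahead, then replay from the server's order if it diverged.
function replay(log, initialState, reduce) {
  const replica = makeReplica(initialState, reduce);
  for (const event of log) replica.apply(event);
  return replica.state();
}
```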
Kind of similar to the content of this article, I wish people were more upfront with their reasoning. Doing something because "it's rad" is definitely fine by me!
I've been looking at Bun to see if it would help but it's still unclear to me.
Heex rendering is just way faster.
If you don't care about SSR for certain components (for example modals, they don't need SSR generally), then I don't see a clear advantage for hooks.
I didn’t know about LiveView and have never used Erlang-family languages, but they’re definitely onto something. The traditional request-response model often causes a lot of subtle problems with consistency and staleness.
A wishful (probably also controversial) thought: if the last decade was about integrating FP concepts into mainstream languages, then I hope the next decade will be oriented around integrating stateful message-oriented (reactive?) programming into the mainstream full stack.
Disclaimer: I wrote or helped write the first three.
Either way it'll probably take about two hours to have your first rudimentary Phoenix chat application loaded in a browser if you follow some guides and tinker around a bit.
Also known as MVC before Rails decided to redefine the term.
In my app, I use reusable Stimulus controllers alongside LiveView, and it works seamlessly as well.
On a general note, while it's a pleasure to build with LiveView, the more I use it in real-life scenarios, the more I realize the benefits of stateless HTTP frameworks like Hotwire, which feel more performant and resilient to reconnections, and avoid the need to place more servers close to users for stability.
Unfortunately, Stimulus doesn’t actually provide an elegant way to keep state on the client. “A stimulus application’s state lives as attributes in the DOM.” This means that it’s not better than vanilla JS or jQuery.
Edit: I haven’t used Stimulus for a real project; it’s possible their values and change callbacks are a better experience than I originally imagined.
My production app has no more than 200 lines of JS, and I could probably get rid of a couple Stimulus controllers. Live View is that good. I also made a very hacky Stimulus-Live View hook adapter, so my Stimulus controller can send events directly to the LV process.
EDIT: Live View does not require you to run geo-distributed servers at all, unless you have bought into the fly.io kool aid a little too much. And it deals with disconnections beautifully. I didn't even have to do anything to support zero-downtime updates. The client loses connection to the WebSocket and reconnects to the new version, restores the state, all that out of the box automatically. What more do you need?
Performance is comparable when I am close to the server (Elixir is slightly faster), but when on another continent any content changes/navigation over WebSockets suddenly feel very laggy, while navigation over HTTP in the supposedly slower Ruby+Rails is actually consistently fast.
I’ve only recently discovered this as I went travelling to another continent, so will do more perf testing.
But the nature of the always-connected WebSockets hasn’t been a pleasurable one for me: for instance, a LiveView process crashes and you lose all the data in a big form unless you write glue code to restore it. And the experience of seeing the topbar stuck in the loading state the second your internet connection is spotty, or when you are offline, just gives me anxiety as a user.
Our own pattern (LiveAlpine?) is as follows:
- Does the component need HTML? Then use an HTML Component.

- Does the component have server-side state? Then use a Live Component.

- Does the component need client-side behavior and/or state? Then also define an Alpine component.

- Does the component need to receive client-side events from the server or make HTTP requests? Then also define a Phoenix Hook.
So maybe this can be the start of a general bridge between LV and React or Vue components too? And make it easy to put these in a page and have them interact with LV events, etc.
The only big downside is you also need to set up server-side React rendering for the initial load if you care about SEO.
Maybe I should dig it up and post it in a gist somewhere (I don't want to maintain an open source library for it).
Also worth checking out Dustin’s talk about optimistic UIs.
https://youtu.be/v-GE_P1JSOQ?si=3tZeAZqoroN1Vubp
And quick tutorial: https://electric.hyperfiddle.net/
Have you used electric clojure?
So, are all client interactions sent through the websocket? I remember years/decades ago we used to do that with ASP.NET, where every single component interaction was handled by the server. How is this different / better?
Some people think it's kind of nice to have one programming language for everything, including queues, cache, database queries, client layout, business logic.
It seems to go against the wisdom of the last few decades, but the network latency seems to permit it now. Throw some edge computing to the mix and maybe it's all a good idea.
I think Alpine is cool, but it didn't really stick with the team. I think that's because we were still writing our components in LiveView and sprinkling in Alpine. A big unlock with LiveSvelte was getting to move so much into `.svelte` files, but not converting the whole thing to a SPA. Working in a `.svelte` file gives you a lot of niceties that an Alpine-decorated LiveView component won't (prettier, intellisense, etc).
This approach could totally work with other paradigms, like Alpine and HTMX. I think the key is using LiveView as a backend-for-frontend, so writing all your components in `.htmx` files or whatever.
Alpine is something like "AngularJS lite," a lightweight JavaScript framework with some similarities to the original version of Angular.
HTMX is a collection of HTML attributes that simplify replacing HTML on your page with HTML partials from your server.
In the article's example of interdependent dropdowns, you'd have HTMX attributes for catching the change event of the first select, specifying the server URL to fetch updated HTML from, and specifying the target to put the HTML into (the second select).
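Concretely, that wiring might look like the fragment below. This is a sketch — the `/models` URL and element ids are made up — but the attributes (`hx-get`, `hx-target`) are standard HTMX, and a `select` triggers on its `change` event by default:

```html
<!-- When the first select changes, HTMX fetches fresh <option> markup
     from the server and swaps it into the second select. -->
<select name="make" hx-get="/models" hx-target="#models">
  <option value="audi">Audi</option>
  <option value="toyota">Toyota</option>
</select>

<select id="models" name="model">
  <option>-- pick a make first --</option>
</select>
```

The server just renders the `<option>` list for the chosen make; no client-side JavaScript is written at all.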
Some have recommended using HTMX for less complicated client-side interactivity and adding AlpineJS to pages that need more. That said, people have built impressive apps just by leveraging HTMX.
I very much relate to the dropdown example, and I've found that complicated UX patterns can be extremely awkward to implement and maintain in LV.
One example from my experience that was prickly to implement in LV was graying out a chat message you sent if it didn't get acked or persisted by the channel/server for any reason.
Can't wait to try this out in my next project!
Having been involved with web applications for a couple of decades, there were some weird things LiveView did that weren't easy for me to pick up. Maybe it's changed in the last 12-18 months, but the initial versions were built oddly for some seemingly easy XHR-type things one would want to do.
If you like Svelte (I do) you're probably going to find sveltekit to be a lot simpler and more useful.
SvelteKit is much simpler and well-documented and addresses the issues mentioned.
If one has Python and Rust experience, what would be a recommended "first principles" path to get started in understanding web development with LiveView and Svelte?
But then that approach has almost nothing to do with svelte or any other SPA tool so that is something else you’d have to learn.
Personally, I’d either start with Phoenix and avoid any SPA tools until you get really comfortable and have hit the boundaries of what is possible with Phoenix, or start with SvelteKit, not think about Phoenix, and save exploring that for another time.
Phoenix is super cool, but I’d suggest starting with SvelteKit, as you can build full stack apps and the same principles apply if you want to move to react or vue or anything else.
I suppose it's also possible with remix
And yet for some reason the thought of Phoenix + LiveView + Svelte makes me want so badly to try it. Just the thought of playing with it has me giddy. This must be a mental disorder I'm experiencing.
Dissociative Framework Disorder.
...given that managing state is the thorniest of issues, what could go wrong with this approach?