For advanced use-cases that's certainly an avenue to explore. For the average user I'm not sure I would recommend going that route. You lose a lot of stuff that HTTP gives you out of the box, e.g. a request/response abstraction and dozens of frameworks and mechanisms for authorization, logging, etc. HTTP is also easily load-balanceable, while websockets are not. HTTP/2 in principle makes small request/response exchanges as fast as you could implement them with a custom websocket protocol, so there's no gain there anymore. And SSE lets you push server updates in a form that still fits the HTTP model and its frameworks pretty well. There are still lots of ways in which websockets can improve reactivity, but you have to invest a lot of engineering effort into it, which might not be worth it depending on the application.
Is all this trying to encourage awareness that using websockets is good? If so, awesome. If not, what is it? Because it does not discuss how websockets lose state on reload. Your game will be in a completely wrong state. Managing and maintaining state is the important thing (which is what https://github.com/amark/gun does, and which is of course also "websockets-first"). When you reload your game, it should return to or remain as it was, or else half your players are going to see a half-broken system!
The framework as I've written it takes state into account; in fact it was/is a design goal. There are currently three types of server objects - WebObject, StaticBuffer and DynamicBuffer. A WebObject stays instantiated on the server even when no users are viewing it (at some point it will be garbage collected).
At any time, rendering that object will produce the exact representation as seen by someone who has viewed it and been updated with websockets. In fact, the card game is a good example of this. Start a game, draw a few cards, start another game, and then go back to the first. It'll be in the same state you left it in, complete with the events that occurred while you were gone. This was one of the top goals.
A DynamicBuffer is just an array of WebObjects in a configurable buffer - think of a list of sports scores, where each item handles its own updates.
A StaticBuffer is an array of simple HTML strings (think of tailing a log file).
So next up: how do you handle conflicts, when two users write to the same thing at the same time? Great work!
Do you have a way to cluster websocket servers so that events are propagated to all clients?
>How do you deal with the potential that a browser goes offline and misses messages?
It depends on how long the browser goes offline: is it a transient disconnect (< 2 minutes) or an extended period of being offline? We deal with those two cases differently. In the first case, we use a sequence-based system wherein each message contains an incremented counter. The server also holds a buffer of the last N messages sent to the client (N calculated from the expected velocity of messages being sent to the client). If the client disconnects, the server will keep that buffer alive for a few minutes (accumulating new messages in the buffer as well). The client can then reconnect, telling the server the sequence number of the last message it received, and the server can replay the missed messages. In the case of an extended disconnect, we treat it as a fresh connection and do a full re-sync of state. The client then relies on the real-time event stream to keep its state updated.
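A minimal sketch of how that sequence-based replay buffer might look. The class and method names here are my own invention, not from the commenter's system:

```ruby
# Sketch of a per-client replay buffer: every outbound message gets an
# incremented sequence number, the server keeps the last N messages, and
# a reconnecting client can ask for everything after the last seq it saw.
class ReplayBuffer
  def initialize(capacity)
    @capacity = capacity # N, sized to the expected message velocity
    @seq = 0             # last sequence number assigned
    @buffer = []         # [[seq, message], ...], most recent last
  end

  # Assign the next sequence number and remember the message.
  def push(message)
    @seq += 1
    @buffer << [@seq, message]
    @buffer.shift while @buffer.size > @capacity
    [@seq, message]
  end

  # On reconnect the client reports the last seq it received. If the gap
  # still fits in the buffer, return the missed messages; otherwise the
  # caller should fall back to a full state re-sync.
  def replay_from(last_seen_seq)
    missed = @buffer.select { |seq, _| seq > last_seen_seq }
    return :resync_required if missed.size < @seq - last_seen_seq
    missed
  end
end
```

The `:resync_required` branch is the "extended disconnect" case: the client fell too far behind the buffer, so replaying is no longer possible.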
>Do you have a way to cluster websocket servers so that events are propagated to all clients?
Yes. Our cluster is built in Erlang/Elixir and consists of several components for fanning out messages. For example, we're able to fan-out 1 message to ~25,000 clients in <0.1ms. (The use-case here is in a massive chat-room - we're able to fan out a message to all the connected users quickly).
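The commenter's system is Erlang/Elixir, so this is purely an illustration of the shape of such a fan-out in Ruby, with all names invented:

```ruby
# Toy sketch of room-based fan-out: a registry maps each room to its
# subscribed clients, and a broadcast delivers one message to all of them.
class FanOut
  def initialize
    @rooms = Hash.new { |h, k| h[k] = [] } # room => client callbacks
  end

  def subscribe(room, &deliver)
    @rooms[room] << deliver
  end

  # Deliver one message to every client in the room; returns the count.
  def broadcast(room, message)
    @rooms[room].each { |deliver| deliver.call(message) }
    @rooms[room].size
  end
end
```

The real fan-out numbers quoted above come from running this kind of registry across a BEAM cluster, not from anything this simple.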
Here I have the clients ping the server every 2 seconds, and if I haven't received a message within 10 seconds (including messages other than pings) I consider the client dead.
Each socket is assigned to an individual player, and if that player opens a second connection the old one dies.
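That heartbeat policy can be sketched roughly like this (class and method names are made up for illustration):

```ruby
# Sketch of the policy above: every inbound message (pings included)
# refreshes a player's last-seen time; a player silent for more than 10
# seconds is considered dead; a second connection replaces the old one.
class PlayerRegistry
  TIMEOUT = 10 # seconds without any message before a client is dead

  def initialize(clock: -> { Time.now })
    @clock = clock    # injectable for testing
    @sockets = {}     # player_id => socket
    @last_seen = {}   # player_id => time of last message
  end

  def connect(player_id, socket)
    @sockets[player_id]&.close # a second connection kills the old one
    @sockets[player_id] = socket
    touch(player_id)
  end

  # Call on every inbound message, including the 2-second pings.
  def touch(player_id)
    @last_seen[player_id] = @clock.call
  end

  def dead?(player_id)
    @clock.call - @last_seen[player_id] > TIMEOUT
  end
end
```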
To be honest - in regards to missed messages and the order of events, I just cross my fingers and hope TCP handles that for me.
In regards to clustering, I have my map split up into sectors, so most messages only need to be sent to the people in that sector. I rarely send a message to all players.
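A hypothetical sketch of that sector-based routing, with invented names:

```ruby
# Sketch: the map is split into sectors, each player belongs to one
# sector at a time, and most events only go to players in that sector.
class SectorMap
  def initialize
    @sectors = Hash.new { |h, k| h[k] = [] } # sector => player ids
  end

  # Place (or move) a player into a sector.
  def place(player_id, sector)
    @sectors.each_value { |players| players.delete(player_id) }
    @sectors[sector] << player_id
  end

  # Recipients for a sector-local event - the common case.
  def recipients(sector)
    @sectors[sector]
  end

  # The rare full broadcast to everyone on the map.
  def all_players
    @sectors.values.flatten
  end
end
```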
I'm still experimenting, so I'll probably still have a lot of edge cases to cover. But for now it seems to work well.
At any time, an "updates-applied" browser version of the current state of the object can be created with a call to the #content method.
So when an "updatable" occurrence happens in the browser, you send an update method with the changes. The changes are packaged up into messages that are sent to each subscriber, where they modify the DOM.
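A minimal sketch of what that update flow might look like. The WebObject name comes from the thread, but the method and message shapes here are my own guesses:

```ruby
# Sketch: the server-side object is the single keeper of state. An
# update mutates that state, then fans a change message out to every
# subscribed browser, which patches its DOM accordingly.
class WebObject
  def initialize
    @state = {}
    @subscribers = [] # callables that deliver a message to one browser
  end

  def subscribe(&deliver)
    @subscribers << deliver
  end

  # Apply the change server-side, then package it for all subscribers.
  def update(changes)
    @state.merge!(changes)
    message = { type: "update", changes: changes }
    @subscribers.each { |deliver| deliver.call(message) }
  end
end
```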
It's probably a bit of a reversal of how people use frameworks that dynamically change content. I make the assumption that the server's instance is the only keeper of object state. Changes made on clients either change object state or they don't; if they are not sure, they send an event back to the server, which handles it.
A WebObject can also have properties that are themselves WebObjects, and the current card game actually has three: ChatRoom, GameStats, and GameObserver.
This pushes the responsibility for rendering content directly where it belongs: on the object where the rendering occurred in the first place.
But it has the side benefit of keeping object state, too. The CardGame doesn't have to worry about the state of the ChatRoom (though it can observe it and send events to and from it).
The bottom line is that there are only two ways the browser rendering could be wrong: 1) Some packets were missed, or 2) The developer didn't send the correct events, or sent them in the wrong sequence.
The first case could be solved by creating a delivery confirmation layer over the objects.
The second case is probably a little more dicey. The more data contained in a single updatable chunk, the harder it is to determine when state changed. That's helped tremendously by the idea of nesting WebObjects (a Room has CardGames, which have Players and Games, which have Decks and Cards... each of which takes care of updating itself).
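The nesting idea can be sketched as a simple composite, where each object renders only itself and delegates to its children. The class name and markup here are illustrative, not from the actual framework:

```ruby
# Sketch of nested objects each rendering themselves: a parent's #content
# is its own markup wrapped around its children's, so state and rendering
# responsibility stay localized to each object.
class Renderable
  def initialize(tag, children = [])
    @tag = tag
    @children = children
  end

  def content
    inner = @children.map(&:content).join
    "<div class=\"#{@tag}\">#{inner}</div>"
  end
end

# A Room-like hierarchy: game contains a deck, which contains a card.
game = Renderable.new("card-game", [
  Renderable.new("deck", [Renderable.new("card")])
])
```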
This territory is well explored -- services like Firebase/Parse are a testament to the many use-cases.
IMO, this isn't per se a new paradigm along the lines of offline-first, which advocates for a local application cache/data synced with a remote store, thus offering the benefit of offline apps along with the sync/realtime benefits of network-connected apps.
If you look how regular websites work, I think it is.
In my last project, people were kind of baffled as to why the changes they made didn't propagate to the other users instantly.
5 years ago everyone knew you had to refresh a page to see changes. Today everyone seems to expect these changes to be pushed to them.
It was also a fight to get all this working with React and Redux. So I think if you want to build a good realtime site, you have to think about this stuff before choosing libraries.
For example, GraphQL has subscriptions that can be delivered via Websockets.
Google is also working on gRPC for the browser. Tomorrow it could be a completely different protocol, but it's all in pursuit of making apps more realtime.
Or are there more problems?
I want to use SSE for one small project soon (I haven't used it before, and don't want websockets, as they're overkill for a basic event stream), and would appreciate any info about how things could go wrong.
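For reference, the SSE wire format itself is tiny: a `text/event-stream` response is just a series of frames terminated by blank lines. A sketch of a helper that emits one frame (the helper name is mine, not from any library):

```ruby
# Build one text/event-stream frame. Each frame is a set of "field: value"
# lines ending with a blank line; multi-line payloads become multiple
# "data:" lines, which the browser's EventSource rejoins with newlines.
def sse_frame(data, event: nil, id: nil)
  frame = ""
  frame << "event: #{event}\n" if event # optional named event type
  frame << "id: #{id}\n" if id          # lets clients resume via Last-Event-ID
  data.to_s.each_line { |line| frame << "data: #{line.chomp}\n" }
  frame << "\n"                         # blank line terminates the frame
end
```

The `id:` field is worth using from day one: the browser automatically sends it back as the `Last-Event-ID` header on reconnect, which is the built-in answer to the "missed messages while offline" question discussed above.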
We'd have a more standard and open web with fewer setbacks. More engineers working on the same engine. A larger team working on a shared native application for everyone to build on, and for them to sell ads on. It even makes good business sense for them now. One can dream, right?
A good talk on the matter can be found at https://youtu.be/NDDp7BiSad4
I never understood why web sockets aren't just a straight break out of HTTP(S) to a naked TCP or SSL socket. Why all the extra complexity? All it does is degrade performance and bloat software.
I hate the way protocols are designed by committee, virtually guaranteeing that everything has layers upon layers of cruft that have never actually been needed by anyone for anything but must be implemented because it's in the spec.
The presentation goes through a bit of the history and lead-up to websockets for context.
One of my favorite things about the demo for the presentation was that everyone in the room (probably about 50 people) was able to connect to the live demo in real-time and some people started finding XSS vulnerabilities in it right away, and using the app to send emojis and pictures to others in the room in real-time (all harmless stuff). It was crazy though how many events you could see streaming through the app just running from my laptop with so many people connected and interacting at once (including x,y coordinates for mouse movements a few times per second per user as well).
The presentation goes through alternatives as well, including long-polling, short-polling, and server-side events (the slides are laid out in two dimensions, so pay attention to when you can hit the "down" arrow instead of the "right" arrow for more slides). It also delves a bit into the actual websocket spec. It was a while ago though, so it doesn't go into anything HTTP2-related.
For me, the project I keep rebuilding to learn new languages/frameworks is Risk (though it's been quite a while now).
After spending so much time with Phoenix channels on my main project, I couldn't resist the fun of reimplementing it over sockets. It'll probably wind up on GitHub when I'm done.
What I mean is why not write a sockets Phoenix channels transports adapter to be used instead of the default one & continue using the existing codebase? :)
Instead of writing: "I couldn't resist the fun of reimplementing it over sockets."
I should have written: "I couldn't resist building another version of Risk with Phoenix/channels/sockets because it should be mind-meltingly easier compared to all of the previous times I'd built it on other platforms"
It's just the case that you can't upgrade from an HTTP/2 request to a websocket connection, since at that point the connection is already in HTTP/2 mode, and both HTTP/2 and websockets need complete control over the TCP stream.
But: you can still make an HTTP/1 request to the server and upgrade to websockets. Both servers and clients will speak HTTP/1 in parallel with HTTP/2 for a loooong time - most likely forever.
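For concreteness, the HTTP/1.1 upgrade handshake boils down to one hash. Per RFC 6455, the server proves it understood the websocket request by concatenating the client's `Sec-WebSocket-Key` with a fixed GUID, SHA-1 hashing it, and echoing the base64 result back as `Sec-WebSocket-Accept`:

```ruby
require "digest/sha1"
require "base64"

# The fixed GUID defined by RFC 6455 for the websocket handshake.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

# Compute the Sec-WebSocket-Accept value for a client's Sec-WebSocket-Key.
def websocket_accept(sec_websocket_key)
  Base64.strict_encode64(Digest::SHA1.digest(sec_websocket_key + WS_GUID))
end

# Using the sample key from the RFC's own handshake example:
websocket_accept("dGhlIHNhbXBsZSBub25jZQ==")
# => "s3pPLMBiTxaQ9kYGzzhZRbK+xOo="
```

After the server responds `101 Switching Protocols` with that header, the TCP connection stops being HTTP at all, which is exactly why the same trick can't work mid-stream on a multiplexed HTTP/2 connection.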
That said, lattice is completely event-driven. The #on_event and #emit_event methods act as entry and exit points. As long as I can have strings come and go between the server and client, HTTP/2 would be fine.
This seems like a pretty good article covering it
http://blog.bugreplay.com/post/152579164219/pornhubdodgesadb...
How many, in comparison to the number of normal web clients that make some XHR call from time to time?
I have written the framework to use a single socket connection for all updatable objects on the page.
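A hypothetical sketch of that multiplexing, with invented names: every message on the shared socket carries an object id, and a dispatcher routes it to the right object.

```ruby
# Sketch: one socket serves many updatable objects by tagging each
# message with an object id and routing it through a dispatcher.
class SocketMultiplexer
  def initialize
    @objects = {} # object_id => handler responding to #apply
  end

  def register(object_id, handler)
    @objects[object_id] = handler
  end

  # Handle one inbound message from the shared socket.
  def dispatch(message)
    handler = @objects[message[:object_id]]
    handler&.apply(message[:payload]) # unknown ids are silently dropped
  end
end
```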