Stuff like MMOs, for example - you don't care about latency as much, your payment model already works with it, and it solves several major issues:
* no hardware barrier to entry for high end graphics
* no installation / play anywhere
* optimizations from shared-state rendering, potentially advanced rendering techniques (world-space lighting) - you can share computation and memory between multiple clients this way
* probably an order of magnitude harder to cheat
A while ago I was working on a Flappy Bird clone (as a test bed for the technology) that did something like this. The app ran locally on the device, but it created a text-based record of all objects and their movements on screen, and at the end of each game it could ask whether you wanted to upload a video of the game you'd just played to YouTube. If you said yes, the small text record of the game was uploaded to the server, which used it to render a video of the game as you played it and upload that.
The idea was that if it were super easy and used almost no mobile bandwidth to upload video of game sessions people liked enough to share, it would go viral. The technology worked well in tests, but Flappy Bird's popularity died off before I released it. Now I'm working on implementing it in another game.
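The recording idea above can be sketched roughly like this - a hypothetical minimal version, not the actual app's code, assuming the game is deterministic enough to replay from per-frame object positions:

```python
import json

class ReplayRecorder:
    """Records a compact text log of object states, one frame per line."""

    def __init__(self):
        self.frames = []

    def record_frame(self, frame_no, objects):
        # objects: {object_id: (x, y)} - store only what's needed to redraw.
        self.frames.append(json.dumps({"f": frame_no, "o": objects}))

    def serialize(self):
        # A few KB of text instead of megabytes of encoded video.
        return "\n".join(self.frames)

def replay(log_text):
    """Server side: walk the log and re-render each frame (rendering stubbed)."""
    for line in log_text.splitlines():
        frame = json.loads(line)
        yield frame["f"], frame["o"]

# Record two frames of a bird and a pipe, then play them back.
rec = ReplayRecorder()
rec.record_frame(0, {"bird": (10, 50), "pipe": (100, 0)})
rec.record_frame(1, {"bird": (12, 48), "pipe": (98, 0)})
frames = list(replay(rec.serialize()))
```

The point is only that the upload is the tiny text log; the expensive video encode happens once, server-side.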
- the introduction of replays in Halo 3 (which worked the same way - by saving the data of the entire session, one could freely move the camera around the entire map and observe any part of the game at any point in time)
- Super Meat Boy's level-end combined replays (which replayed all of the user's attempts simultaneously, creating a pretty amusing sort of "heatmap" effect)
I think this will eventually become standard for games where replays would be valuable or fun to watch. But I'm not sure about actually rendering live games server-side until we're at a point where input latency is unnoticeable.
I think TIS-100 and other Zachtronics games do that as well for score validation.
Also known as what every id Tech / GoldSource / Source[1] / Unreal Engine / many other engines do for recordings and replays, except they do it in a binary format. It's not exactly a novel idea.
Haven't you heard of Gaikai (now owned by Sony), which provided cloud-based gaming? https://en.wikipedia.org/wiki/Gaikai
The problem is latency. You already have input latency from your wireless gamepad to/from your PC/console, plus the rendering loop's CPU/GPU time - and now you add the latency of your network and the server's CPU/GPU time on top. Good night. If you live in a big hub city near the cloud datacenter you can use it, or if you like slower-paced games. But if you like to play Quake (read: very fast-paced first-person shooter action at 120+ fps), forget about cloud streaming your games for now!
Server side rendering would exploit the fact that all of your clients are using the same resources to render the game - so you only need to keep 1 instance of mesh x in memory and you can render it for all clients.
As I said below, if you have shared-instance worlds like an MMO you can do much more sharing - you can have shared effects just as you have a shared simulation for clients. The effects can then use techniques that might not be viable on consumer hardware, e.g. some sort of world-space global illumination technique like radiance transfer. These techniques usually aren't used because you need a lot of memory and processing power, but you can already get server GPUs with >10 GB of memory, and in the future you could have one dedicated GPU that just computes global illumination and feeds view-specific data to each client's rendering thread.
It would completely change the way games are rendered. Right now most effects are done in screen space because it's faster (e.g. deferred shading), but if you could share world scene state you could get a lot fancier with world-space effects. The rendering pipeline would also need to be much more asynchronous to avoid latency (e.g. global illumination could be a separate process that syncs occasionally rather than every frame, trading a bit of lighting latency for frame latency).
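As a rough illustration of that trade-off (all names and numbers here are made up, not from any real engine), the shared lighting pass can run on its own cadence while per-client view work just reads the latest result:

```python
# Sketch: one shared world-space lighting pass amortized across many client
# views, refreshed only every few frames to keep it off the per-frame
# critical path. Lighting may lag a few frames behind animation.

LIGHTING_REFRESH_INTERVAL = 4  # illustrative cadence

class SharedInstance:
    def __init__(self):
        self.frame = 0
        self.lighting = None  # world-space GI result shared by every client

    def compute_global_illumination(self):
        # Expensive world-space pass; runs once for the whole instance,
        # not once per client. Stubbed as a tagged value.
        return {"computed_at_frame": self.frame}

    def tick(self, clients):
        if self.frame % LIGHTING_REFRESH_INTERVAL == 0:
            self.lighting = self.compute_global_illumination()
        # Per-client view work only *reads* the shared result.
        views = [(c, self.lighting["computed_at_frame"]) for c in clients]
        self.frame += 1
        return views

inst = SharedInstance()
results = [inst.tick(["alice", "bob"]) for _ in range(6)]
```

By frame 3 both clients are still reading the lighting computed at frame 0; it only refreshes at frame 4 - the "lags a frame or two" behavior described above, paid once per instance instead of once per client.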
Square Enix's solution https://www.shinra.com/us
No more client-side hacks (to the .exe) that add draw calls for things like drawing a red rectangle around enemy units, forcing them to be drawn blended so they can be seen through walls, turning wireframe on, making all significant audio loudest (so you can hear footsteps more easily), and so on.
Economically it'd be a disaster too. Since many games max out resource usage on the machine they're running on, you're not going to get much shared hosting. So now you're talking about an (expensive) machine for each concurrent user.
You're always bound by the same simulation latency - your input goes to the server and the server returns world state for your PC to render. The only extra latency in this scenario is the time required to encode and decode the video, which could be offset if the server renders faster than the client PC.
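A back-of-envelope sketch of that budget, with purely illustrative numbers:

```python
# Back-of-envelope latency budget, all numbers illustrative, in milliseconds.
# Local play: input -> render -> display.
# Cloud play adds network RTT plus video encode/decode, but the server may
# render a frame much faster than a weak client would.

def local_latency(input_ms, render_ms, display_ms):
    return input_ms + render_ms + display_ms

def cloud_latency(input_ms, network_rtt_ms, server_render_ms,
                  encode_ms, decode_ms, display_ms):
    return (input_ms + network_rtt_ms + server_render_ms
            + encode_ms + decode_ms + display_ms)

# A weak laptop taking 33 ms/frame vs. a fast server at 8 ms/frame:
local = local_latency(input_ms=5, render_ms=33, display_ms=10)
cloud = cloud_latency(input_ms=5, network_rtt_ms=20, server_render_ms=8,
                      encode_ms=5, decode_ms=5, display_ms=10)
```

With these made-up numbers, cloud ends up at 53 ms vs. 48 ms local - the claim being that faster server rendering can nearly cancel out the network plus codec cost, provided the RTT stays small.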
As for the economics, as I already said, you can exploit shared state heavily in games if you rework the way rendering works. Right now games focus on camera-space rendering because they only care about one view output, and view-space effects are cheapest in that scenario.
If you have shared-state rendering, suddenly you can process the instance state once per frame for hundreds of users, just like you already simulate the game once per instance for hundreds of users.
You can have specialized hardware setups (i.e. multiple high-end GPUs with >10 GB of RAM) doing dedicated tasks like recomputing lighting/shadows, animation, particle effects, etc. You would need to make these systems asynchronous to reduce lag, so lighting updates might lag a frame or two behind animation, but as usual with rendering there are clever tricks to fool the eye - and once you have gigabytes of RAM at your disposal, very different rendering techniques become viable for shared-state rendering compared to the ones currently used.
Someone already linked a platform that's doing this. I have no doubt this is the future of VR and gaming, at least in part (maybe you won't stream video to the client but some geometric 3D world representation with deltas, so the final rendering can be done client-side and you get low-latency head rotation for things like VR).
https://games.amazon.com/games/the-unmaking
"Powered by Amazon’s AppStream -The Unmaking is the first game to unleash the power of the cloud so players can experience thousands of enemies, destructible environments, and an epic, cinematic soundtrack on their Fire tablets."
May or may not be cost effective.
I'm highly skeptical of the viability of your point #3.
Cloud rendering would be something like: you and I play the same game, and the server keeps only one instance of every rendering resource to reduce memory overhead.
If you have shared instance worlds (eg. MMO) you can then do shared state effects like animation, advanced lighting, etc. and reuse the calculations for each client.
Some very slow MVA monitors lag as much as a full frame behind input, so in extreme cases you could see equal delay on your laptop.
I'm not sure how much latency OS X or Chrome adds (at least a frame more than Firefox[4]); Mobile Safari seems a bit faster, as is evident in the video. I don't have a second Windows machine for comparison.
Edit: When connecting on the same machine, latency is 2 frames (33ms) exactly[5].
[1] http://phoboslab.org/files/jsmpeg/jsmpeg-vnc-latency.mp4
[2] http://phoboslab.org/files/jsmpeg/jsmpeg-vnc-latency.jpg
[3] http://www.anandtech.com/show/2922/4
[4] http://phoboslab.org/log/2012/06/measuring-input-lag-in-brow...
Tell me about it. I've been trying to gather info on a setup where I record with a camera, probably 1080p (via Ethernet, or via HDMI to a capture card), and use the frame data as input to a simulation with a latency around 20 ms - and I'm unable to get hard data on the actual latencies. Nobody cares about input lag because practically nobody uses a recording as a real-time input.
Incidentally, if anyone knows of or has experience with a similar setup, I'd be forever grateful.
Another thought that occurs to me is that people who work with music equipment (especially USB synths and such) care about latency a lot. It may be worthwhile to hook into that scene and see what kind of insights can be gleaned. Here's a link to get you started: http://www.mathworks.com/help/dsp/examples/measuring-audio-l... Looks like they're basically using a feedback model as well.
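The feedback model from that audio world translates directly: play a known signal, record it back, and cross-correlate to find the lag. A minimal simulated sketch (no real audio I/O here - the "recording" is just the signal delayed by 441 samples, i.e. 10 ms at 44.1 kHz):

```python
import numpy as np

def measure_loopback_delay(played, recorded, sample_rate):
    """Estimate latency by cross-correlating the played signal with the
    recording of it; the lag of the correlation peak is the delay."""
    corr = np.correlate(recorded, played, mode="full")
    lag = int(np.argmax(corr)) - (len(played) - 1)
    return lag / sample_rate  # seconds

# Simulated loopback: a noise burst delayed by 441 samples (10 ms @ 44.1 kHz).
rate = 44100
rng = np.random.default_rng(0)
burst = rng.standard_normal(2048)
recorded = np.concatenate([np.zeros(441), burst])
delay_ms = measure_loopback_delay(burst, recorded, rate) * 1000
```

In a real rig, `played` would go out the speakers (or HDMI) and `recorded` would come back in through the capture device, so the measured lag covers the whole chain end-to-end.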
However, the thing that worries me most is not latency or graphical fidelity, but how this will affect the actual design of AAA games. If I as a developer/studio know that my AAA title is going to be remotely controlled, possibly accessible only via subscription, and that the end user's progression can be monitored at any point, the opportunities for monetization skyrocket. You think DLC is bad now? Try playing Skyrim and being able to buy in-game gold at any point via a browser pop-up. Just died in Dark Souls? Respawn with no lost souls and kill all enemies in the vicinity for only $1!
I'm not saying that all games will end up being this way, but a certain amount of AAA titles may end up going with more mobile-oriented pricing models if this sort of remote gameplay becomes mainstream.
One option is to have AA studios / publishers fund teams of "indie" devs with license to go nuts exploring new aspects of the medium.
Personally, since we're running full tilt towards needing something to replace "jobs," I'd like to hope that at some point some places (countries, planets(?)) might institute a universal minimum income, along with a flat-rate fair tax. Then folks could earn currency by playing games (or creating art, making music, solving interesting problems, whatevs...).
Could go a long way towards building space-ships & eliminating poverty / starvation...
I wish the industry would just get its act together and support a common video codec in browsers, along with a JavaScript API that can decode single frames. Currently, native streaming support for the <video> element in browsers is extremely poor, and proposed solutions like MPEG-DASH or HLS add around 10 seconds(!) of latency.
You will need to include a WebRTC stack in your server though, which is a lot more complicated. But I think you will get better overall performance with it versus TCP - it's what WebRTC was designed for.
Also, you might want to look at ogv.js if you want to keep going the JS route - it includes a very fast Theora decoder, which should still be a lot better than MPEG-1.
[1] https://en.wikipedia.org/wiki/Media_Source_Extensions [2] http://html5-demos.appspot.com/static/media-source.html
They've since gone defunct; Sony picked up their assets at the end of April this year (https://en.wikipedia.org/wiki/OnLive).
With the roll out of more fiber networks, this could be a reality (assuming you still had enough host locations to reduce latency).
In-home streaming has far lower latency than out-of-home streaming - unless the server is just next door to where you live.
Think 5-10-15 years when everyone has gigabit+ fiber to the home.
Actually, I could see this being handy while travelling. Sure, at home I've got a decent gaming rig. But it'd be nice to have decent quality gaming on a low-end laptop...
I think it's cool
Chrome Remote Desktop works OK, and it's a sign of things to come in the field of virtual desktops. Right now, the state of the art for on-demand cloud remote development desktops (to run intensive development environments like IntelliJ / Visual Studio / Mathematica on underpowered clients, e.g. a 12" MacBook) is to rely on proprietary protocols that barely work if the target remote machine is a Linux desktop.
Yes, I know about x2go etc, but I've had so-so experiences with it. Compared to streaming games, I wonder if there's a product in here somewhere.
Coming from VNC and Remote Desktop, it's remarkable how low the latency is. In many cases, because I can get a significantly higher framerate by rendering remotely, the latency is better than if I were doing everything locally.
For games, it's worked pretty well with everything you'd be happy playing with a gamepad (e.g. GTAV, but not CoD).
We can run arbitrary windows apps and stream them to your web browser, including videogames.
I think the major challenge with remote gaming is input latency.
Even here on a local network, there looks to be at least ~200 ms between input and frame, which is a blocker for many game types.
One thing to note: It gets a bit confused about where your mouse is if you try to stream one monitor in to a browser on a second monitor.
It allows you to play your Xbox games on your PC.
http://vrscout.com/projects/project-irides-microsofts-cloud-...
- How feasible would it be to host multiple client connections (each running their own instance of the hosted program) ?
Edit: Not that they couldn't do it with GTA V I just prefer those games over it. Saints Row the Third or Just Cause 2 are also acceptable.
https://github.com/jsmess/jsmess
https://github.com/dreamlayers/em-dosbox
https://bitbucket.org/tsone/em-fceux
and others
https://github.com/kripken/emscripten/wiki/Porting-Examples-...
The pointless part is that he's connecting to a "server" that's next to him, not in a DC in London. I guess if you really wanted to play GTA V on the toilet, on your phone, it solves the issue - but really...
1. AWS can run Windows and therefore Steam. Nothing new.
2. Steam in-home streaming works over VPN. Nothing new.
3. VNC can stream games. Nothing new.
etc.
But put all these pieces together and the first person to wrap it up and make sure all the legal parts are in place might make a fortune.
That being said, in this case these three dots have been joined up multiple times already, so there really is nothing new here. The interesting challenge is in reducing latency so that remote games are actually playable when the server is not on the same LAN.
http://www.pcworld.com/article/2359241/how-steam-in-home-str...
http://shield.nvidia.co.uk/play-pc-games/
http://store.steampowered.com/streaming/
This guy isn't running it from AWS, though, like I said. He's running it from a "server" that's right in front of him. By the way, have you ever tried RDPing into a remote Windows machine and doing some trivial tasks in the GUI?
at 60 fps?
A few years ago I was lucky to get 3 fps from a local server.
That is completely wrong. Remote desktop doesn't really care about latency or framerate. The vast majority of updates will change only a few pixels (e.g. moving the cursor or typing a letter), and when fullscreen updates do occur (switching apps), it's okay if it takes a visible fraction of a second for the update to sweep across the screen.
Gaming needs to be high-speed, it needs to be low-latency, and it needs to be seamless. 30FPS is an absolute rock-bottom minimum, and gamers are increasingly demanding 60FPS (or more). Some specific genres, like rhythm games, are so sensitive to timing that you can tweak their input-detection settings based on the latency of your television.
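That calibration trick can be sketched in a few lines - a hypothetical hit-judgment function, with made-up window and offset values, that subtracts the measured display latency from the raw input timestamp:

```python
# Sketch: rhythm-game hit judgment with a display-latency calibration offset.
# If the TV shows frames 80 ms late, the player reacts to what they *see*,
# so subtract that offset from the raw input timestamp before judging.

HIT_WINDOW_MS = 50  # how far from the beat still counts (illustrative)

def judge_hit(input_time_ms, beat_time_ms, display_latency_ms):
    effective = input_time_ms - display_latency_ms
    return abs(effective - beat_time_ms) <= HIT_WINDOW_MS

# Beat at t=1000 ms; player presses at t=1080 ms because the TV is 80 ms late.
no_cal = judge_hit(1080, 1000, display_latency_ms=0)     # counted as a miss
with_cal = judge_hit(1080, 1000, display_latency_ms=80)  # counted as a hit
```

This is exactly why those games expose a latency setting: the same physical button press flips from miss to hit once the display delay is accounted for - and a cloud-streaming setup would add yet another offset to calibrate away.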