The only reason they can do that today without specialised hardware à la OnLive is that Nvidia and co. now have dedicated encoder circuits on the GPU that let you stream the encoded video output straight to memory.
When OnLive started they had to do it themselves.
I don’t believe you have any idea what you are talking about.
Could you clarify what you (and the comments mentioning NVENC below) are talking about?
As far as I understand it, NVENC and its ilk are solutions that capture video output and encode it to H.264. So they are H.264 encoding accelerators, nothing more.
If you were running a datacenter this way, you'd be much better off with a bank of Matrox capture cards (which support multiple simultaneous inputs): dedicated hardware converting your video game output to H.264 streams for broadcast. They even support capturing, H.264-encoding, and streaming to IP addresses on your network for further distribution.
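For a sense of why the encode step matters at all, here is a back-of-the-envelope comparison of raw framebuffer bandwidth against an encoded stream. The resolution, 4 bytes/pixel, and the 15 Mbit/s bitrate are my own illustrative assumptions, not measurements:

```python
# Back-of-the-envelope: raw framebuffer bandwidth vs. an encoded H.264 stream.
# Assumes 1080p at 60 fps, 4 bytes/pixel (BGRA), and a 15 Mbit/s stream --
# illustrative numbers only.
width, height, bytes_per_pixel, fps = 1920, 1080, 4, 60

raw_bits_per_second = width * height * bytes_per_pixel * fps * 8
stream_bits_per_second = 15_000_000  # a plausible 1080p60 streaming bitrate

print(f"raw:     {raw_bits_per_second / 1e9:.2f} Gbit/s")
print(f"encoded: {stream_bits_per_second / 1e6:.0f} Mbit/s")
print(f"ratio:   ~{raw_bits_per_second / stream_bits_per_second:.0f}x")
```

Shipping uncompressed frames anywhere, even across a PCI bus to a capture card, is two orders of magnitude more bandwidth than shipping the encoded stream, which is why having the encoder sit right next to the framebuffer is so attractive.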
What would be interesting is if they bypassed the video display output phase completely and rendered the finished framebuffers to memory (or to an encoder chip via DMA over PCI) instead. I assume this would cut down manufacturing (and HDMI licensing?) costs a bit.
I know AMD has DirectGMA that allows other devices on the PCI bus access to limited chunks of GPU memory. There are signalling mechanisms in DirectGMA so that devices can basically implement producer-consumer pairs.
As far as I know this doesn't exist in consumer chips though. You need "workstation" GPUs. Which might explain Google's particular GPU choice, now that I think about it.
>> As far as I understand it, NVENC and its ilk are solutions that capture video output and encode it to H.264. So they are H.264 encoding accelerators, nothing more.
Me:
>> allowing one to stream the encoded video output to memory.
You're being pedantic. What I meant is that OnLive actually needed specialised hardware because they could not provide their service without it (as in: there was no hardware capability to render anywhere other than an external display without changes to the codebase). You, on the other hand, are talking about the idealised "perfect" implementation.
Google doesn't need that hardware in order to provide that service because by now there's a hardware capability that allows you to capture the screen of the device you are rendering to. They could try to be more efficient by using other hardware, but that's not a prerequisite for their Gaming Service.
That's exactly what NVENC, the Capture SDK, AMD's AMF, and Intel's QuickSync do. Just encode the framebuffer output of whatever game engine you're using with one of those APIs. (And remember to make sure your game engine's license allows you to use it that way. As always, if you use Unity you're out of luck; they specifically forbid this.)
Can you give a little more info on this? Is this standard on consumer-level GPUs? What is the tech called?
The NVENC API allows you to do this on a commodity Nvidia card. If you have the good stuff from Nvidia, you can likely use the Capture SDK, which is a bit more scalable. If you use AMD, there is AMF. And if you're cheap, you can just use QuickSync from Intel. All of these encoding APIs will likely be unified under an industry-wide API à la OpenGL at some point in the future, but it won't be implemented by everyone in a timely fashion, so you have to pick your poison right now.
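In practice, the easiest way to reach all three vendor encoders today is through FFmpeg, which wraps them behind a uniform interface. A minimal sketch, building an FFmpeg invocation that hardware-encodes raw frames piped from a game process; the encoder names (`h264_nvenc`, `h264_amf`, `h264_qsv`) are FFmpeg's actual wrappers for the vendor APIs, while the resolution, bitrate, and output path are placeholder choices of mine:

```python
# Sketch: build an ffmpeg command that hardware-encodes raw BGRA frames
# read from stdin. Encoder names are FFmpeg's wrappers for each vendor API;
# resolution/bitrate/output are illustrative placeholders.
HW_ENCODERS = {
    "nvidia": "h264_nvenc",  # NVENC
    "amd":    "h264_amf",    # AMF
    "intel":  "h264_qsv",    # QuickSync
}

def encode_command(vendor, width=1920, height=1080, fps=60,
                   bitrate="15M", out="out.mp4"):
    """Return an ffmpeg argv reading raw BGRA frames from stdin."""
    return [
        "ffmpeg",
        "-f", "rawvideo", "-pix_fmt", "bgra",
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", "-",                    # raw frames piped in by the game process
        "-c:v", HW_ENCODERS[vendor],  # pick your poison
        "-b:v", bitrate,
        out,
    ]

print(" ".join(encode_command("nvidia")))
```

You'd then launch this with `subprocess.Popen` and write each finished framebuffer to its stdin, letting the GPU's encoder block do the heavy lifting. This only works, of course, on a machine whose FFmpeg build was compiled with the relevant vendor SDK.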
One gotcha, though: you can only do this if your game engine's license allows it. Which means anyone who uses Unity is out of luck. Unreal and other engines? I don't know; I think you can talk to the companies and they'll probably? give you permission? (At least I'd hope they would, until they go into the cloud gaming business themselves.)
Your best shot though is just to take an open source engine, like Godot, and "NVENC it up" so to speak. It'll take maybe a couple of hours and it'll save you a lot of headache.
They may be referring to ShadowPlay, a feature that keeps a buffer of the output so you can save something that just happened. Again, available for 4-5 years.
You are gatekeeping on irrelevant details when my point is that prior entrants had higher, even untenable, costs and Google might not, which I think you agreed with.
Google's advantage is that it doesn't need a dedicated hardware play, has saturation in the relevant markets with its existing hardware plays, and has spare capacity in its large compute infrastructure. Whether consumers care is another story, but Google can throw money at making them care without untenable overhead costs as a distraction.