Meanwhile, some Google engineer realized you could cover 90% of phone-to-TV streaming use cases, and sidestep 100% of the hard technical problems, by just telling the TV to download and play the YouTube video itself. Genius!
Any real-time or interactive display will need to stream at sub-frame latencies. At 60 fps that means less than 16.7 ms; at VR-friendly refresh rates around 90 fps, about 11.1 ms.
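A quick back-of-the-envelope of those budgets (just 1000/fps), keeping in mind the whole budget has to cover encode, transit, and decode, not just rendering:

    # Per-frame latency budget at common refresh rates.
    for fps in (60, 90, 120):
        budget_ms = 1000 / fps
        print(f"{fps:>3} fps -> {budget_ms:.1f} ms per frame")
    # 60 fps -> 16.7 ms, 90 fps -> 11.1 ms, 120 fps -> 8.3 ms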
While their approach works beautifully for their core competency, static and non-interactive streaming content, it doesn't really work for any other application.
Certainly the concerns and dynamics of the situation are different now than in the 70s and 80s, but some of the thought processes are the same. People want to stream video games because they don't have $2,000 to lay out up front on a gaming PC. Streaming lets them pay $5 a month instead, and unlike credit, there is no commitment. That's valuable. Greed is another reason for the cloud. There is no reason why someone should pay $10 per month for Photoshop, but since it's the only option, people do. That's free money for Adobe's shareholders.
I can see why people try to pooh-pooh this stuff; computing is built on hobbyist experimentation, and the cloud takes all that away. You can't write your own video game. You can't tweak settings, or make mods. You just get a game that someone else made.

But from a technical standpoint, streaming is probably going to work. I have less than 1 ms ping to a nearby datacenter (speed-of-light distance: 8 microseconds), and so do 10 million of my neighbors, so it's probably quite profitable to have a collection of high-density GPUs and CPUs rendering games for a few peak hours a day and then training machine learning models outside those hours. The technical challenges are minimal; the idea has been around for 50 years.

The actual challenge is getting the people who own the cables in the ground between your house and that datacenter to actually switch packets quickly enough to make it all work. When you were connecting a mainframe in the basement to terminals upstairs, you made it work because it was your job. But now one company owns all the cables and another wants to make content to send over those cables, and the incentives no longer align. Sure, Spectrum COULD update their core routers... but they could also not do that, and then your video game streaming service is dead. (Meanwhile, they dream of showing up and making their own video game streaming service. They have as much time as they want, because they own the cables!)
I also remember, back in 2008, hearing about how RFID tags would soon be as ubiquitous on consumer products as UPCs, and you could just load up a cart with groceries and walk out the door without scanning anything. That one may actually pan out, but it's arriving much later than it was supposed to.
A renderer is a device that receives a command to go pull some media from somewhere and start playing it - and that can include video. (There's a rough sketch of the idea after the timeline below.)
So the concept has been around a long time.
AirPlay: 2010
Miracast: 2012
Google Cast/Chromecast: 2013
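A minimal sketch of that renderer idea - the message format here is made up for illustration, not the actual Cast or DLNA wire protocol:

    import json

    def play_from_url(url):
        # Stand-in for a real media pipeline; an actual renderer would
        # fetch and decode the stream itself.
        print(f"fetching and playing {url}")

    def handle(message):
        cmd = json.loads(message)
        if cmd["type"] == "load":
            # The controller never relays the video bytes; it just
            # points the renderer at the source.
            play_from_url(cmd["url"])

    handle('{"type": "load", "url": "https://example.com/video.mp4"}')

The key design point is that the phone is only a remote control; the TV does all the heavy lifting.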
There are practical problems with the technology though, as light sources we can currently make have some minimum size limitations, and incoherent optical behavior starts to degrade at very small lens size (at or below micron scale I guess).
[1] You could probably create a good approximation of phased-array optics with LED-scale (~10 micron) coherent lasers as light sources, but again I don't see any application that's not scientific
Perhaps actual phased array optics wouldn't have that issue?
For those just entering this thread, [1] is an example of a rudimentary microlens array display.
Holographic displays would use eye trackers to show each eye a different image. Solid-state zoom is maybe a bit of a stretch, but it would involve pixels becoming sensitive to angles further inward or outward from the sensor's center.
This is one thing that really pisses me off. Time and time again you've got small(ish) companies doing interesting stuff, succeeding, and then they step on a land mine: they do something that gets them in the crosshairs of a big company, and suddenly BOOM, big company buys small company for ridiculous money and then inexplicably shuts down 90% of what the small company was doing. The sale happens at a nice premium, and yet the second it closes, 90% of the things that made the company valuable are jettisoned. How can these companies afford to buy a company at a premium and then throw away massive parts of its value? And yet this obvious value destruction seems to be standard operating procedure for large companies.
It's almost like the lack of robust anti-trust prosecution by world governments has so enriched large, rent-seeking companies that they can literally afford to burn money and still come out ahead...
If it were valuable, it wouldn't be jettisoned
Maybe a dumb question, but how is it even possible to do SDR with a 60 GHz signal on a ~4 GHz CPU via a 5 Gbps USB 3 connection?
EDIT: I guess via down-conversion? https://en.wikipedia.org/wiki/Digital_down_converter
Or, do what old computer architectures did when their CPUs were slower than their DACs: add a Programmable Interval Timer (i.e. a very simple synthesizer) in between, such that you just send a few commands and it adds together some 60GHz triangle and square waves to achieve the signal shape you want. Maybe even add a sequencer, and then stream it some 60GHz MIDI files!
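As a toy illustration at audio rates (nothing here is anywhere near 60 GHz), the "send a few commands, let cheap primitive waves do the work" idea looks something like:

    import numpy as np

    fs = 48_000                     # toy sample clock, not 60 GHz
    t = np.arange(fs) / fs          # one second of samples

    def square(f):
        return np.sign(np.sin(2 * np.pi * f * t))

    def triangle(f):
        return 2 / np.pi * np.arcsin(np.sin(2 * np.pi * f * t))

    # Two "commands" to the imaginary synth: mix a square and a
    # triangle instead of streaming every sample from the CPU.
    signal = 0.5 * square(440) + 0.5 * triangle(660)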
https://techdocs.altium.com/display/FPGA/NEC+Infrared+Transm...
TX/RX sample rates are thus independent of the carrier frequency. Only the local oscillator needs to be configurable to the correct carrier frequency.
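Right. A minimal digital down-conversion sketch, with made-up rates and a deliberately crude filter, showing the mix / low-pass / decimate chain:

    import numpy as np

    fs = 10e6       # assumed ADC sample rate after the analog front end
    f_if = 2.5e6    # assumed intermediate frequency the signal sits at
    n = np.arange(100_000)
    rf = np.cos(2 * np.pi * f_if * n / fs)   # stand-in for the digitized IF

    # Mix down to baseband with a complex local oscillator...
    lo = np.exp(-2j * np.pi * f_if * n / fs)
    baseband = rf * lo

    # ...then low-pass (a crude moving average here) and decimate, so
    # the USB link only carries the narrow baseband, never the carrier.
    decim = 10
    kernel = np.ones(decim) / decim
    iq = np.convolve(baseband, kernel, mode="same")[::decim]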
That explains how close together the antennas are - close enough relative to the wavelength to be able to beamform.
Edit: it also explains why it would be extremely difficult to build something like this yourself at 60 GHz - every trace length has to be matched to sub-millimeter tolerances, and a sub-millimeter stub acts as both an antenna and a circuit element.
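Back-of-the-envelope for a uniform linear array at 60 GHz (textbook formulas, not this particular chip's actual geometry):

    import numpy as np

    c, f = 3e8, 60e9
    lam = c / f        # wavelength: ~5 mm at 60 GHz
    d = lam / 2        # classic half-wavelength spacing: ~2.5 mm

    # Per-element phase shift to steer the beam 30 degrees off
    # boresight: delta_phi = 2*pi*d*sin(theta)/lambda per element.
    theta = np.radians(30)
    phases = 2 * np.pi * d * np.sin(theta) / lam * np.arange(8)
    print(f"lambda = {lam*1e3:.1f} mm, spacing = {d*1e3:.2f} mm")
    print(np.degrees(phases) % 360)   # 0, 90, 180, ... degrees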
It still gets me, the level of miniaturization you see when you come back to an idea 20 years later instead of watching the incremental changes along the way.
It sounds like it would be very suitable for a VR headset.
I believe RTL-SDR did extend the RTL products' end of life much further.
Could this occur with this (or a similar) phased-array chip?
https://www.digikey.com/product-detail/en/acconeer-ab/A111-0...