Also, transmitting 10 Gb/s with an LED seems challenging. The spectral bandwidth of an incoherent LED is large, so are they doing significant DSP (which costs money and energy and introduces latency), or are they restricting themselves to very short (tens of meters) links?
On the distance - exactly right. The real bottleneck in AI clusters now is the interconnect within a rack, i.e. sub-10 m links. That is the market we are addressing.
On your second point - exactly! Normally people think LEDs are slow and suck. That is the real innovation: at Avicena, we've figured out how to make LEDs blink on and off at 10 Gb/s. This is really surprising and amazing! With simple on-off modulation there is no DSP or excess energy use. The article says TSMC is developing arrays of detectors, based on their camera process, that also receive signals at 10 Gb/s. Turns out this is pretty easy for a camera with a small number of pixels (~1000). We use blue light, which is easily absorbed in silicon. BTW, feel free to reach out to Avicena - happy to answer questions.
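To put rough numbers on that claim, here's a minimal back-of-the-envelope in Python. The array size and per-pixel rate come from the discussion above; the energy-per-bit figure is purely an assumed placeholder, not an Avicena spec:

    PIXELS = 1000            # emitter/detector array size (~1000, per the article)
    RATE_PER_PIXEL = 10e9    # bits/s per pixel with simple on-off keying
    ENERGY_PER_BIT = 1e-12   # J/bit -- hypothetical value, for illustration only

    aggregate_bps = PIXELS * RATE_PER_PIXEL
    power_w = aggregate_bps * ENERGY_PER_BIT

    print(f"aggregate throughput: {aggregate_bps / 1e12:.0f} Tb/s")
    print(f"link power at 1 pJ/bit: {power_w:.1f} W")
    # -> 10 Tb/s aggregate; 10.0 W at the assumed 1 pJ/bit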
The reliability of micro-LEDs, and specifically GaN-based micro-LEDs, is however an open question.
In the absence of any dislocation failure mechanisms, it will depend on the current density and thermal dissipation. And just like any other material, it will have to survive in a non-hermetic environment and in the presence of corrosive gases (an issue in data centers).
To get the 10G, it's probably kind of like a VCSEL without the mirrors, so the current density is probably high. How well you're able to heat-sink it is going to determine how reliable it will be.
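To make the heat-sinking point concrete, here's a toy first-order model (junction temperature rise = dissipated power x thermal resistance). Every number below is an assumption for illustration, not a measured microLED value:

    drive_current = 2e-3     # A, assumed drive current per microLED
    forward_voltage = 3.0    # V, ballpark for a GaN blue emitter
    wallplug_eff = 0.3       # assumed fraction of input power leaving as light
    r_th = 5000.0            # K/W junction-to-ambient for a tiny die (assumed)

    p_heat = drive_current * forward_voltage * (1 - wallplug_eff)
    delta_t = p_heat * r_th
    print(f"dissipated heat: {p_heat * 1e3:.1f} mW, junction rise: {delta_t:.0f} K")
    # -> 4.2 mW and a ~21 K rise; halve r_th with better heat sinking and the
    #    rise halves too, which is why packaging dominates the reliability picture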
Overall I like the idea, and it looks like the beachfront could work. I'd spend more time talking about how the electrical connection works and what kind of interface to a chip would be needed.
I'd also be careful before throwing shade on laser reliability, because it could backfire on you (for all the reasons above).
The claimed advantage is a very high aggregate throughput and much less energy per bit than with either copper links or traditional laser-based optical links.
For longer distances, nothing can replace lasers.
With quantum computing, one is forced to use lasers. Basically, we can't transmit quantum information with the classical light from LEDs (hand-wavingly: LEDs emit a distribution of possible photon numbers, not single photons, so you lose control at the quantum level). Moreover, we often also need the narrow linewidth of lasers so that we can interact with atoms in exactly the way we want, i.e. without exciting unwanted atomic energy levels. So you see people in trapped-ion quantum computing tripping over themselves to realise integrated laser optics, through fancy engineering that I don't fully understand, like diffraction gratings within the chip that diffract light onto the ions. It's an absolutely crucial challenge to overcome if you want to make trapped-ion quantum computers with more than several tens of ions.
Networking multiple computers via said optical interconnects is an alternative, and also similarly difficult.
What insight do I glean from this IEEE article, then? I believe that if this approach with LEDs works out for this use case, I'd see it as a partial admission of failure for laser-integrated optics at scale. It is, after all, the claim in the article that integrating lasers is too difficult. And then I'd expect quantum computing to struggle severely to overcome this problem. It's still research at this stage, so let's see if Nature's cards fall fortuitously.
But even more than that, this seems to me like a purely on-chip solution. For trapped ions and neutral atoms you really need to translate to free-space optics at some point.
As for fully integrated optics, that's where quantum computers eventually want to be, and there are no known physical limitations currently. But perhaps it's too early to say whether we would absolutely require free-space optics because some optical function turns out to be impossible any other way.
However, it's not correct to say lasers are unreliable. That claim is fundamentally false and not supported by field data from today's pluggable modules: tens of millions of lasers are deployed in data centers today in pluggable modules.
It's also useful to remember that an LED is essentially the gain region of a laser without the reflectors. When lasers fail in the field, they fail for the same reasons an LED will fail: moisture or contamination penetrating the semiconductor material.
An LED is not useful for quantum computing. To create a Bell pair (2 qubits) you need a coherent light source to produce correlated photons; the photons produced by an incoherent light source like an LED are fundamentally uncorrelated.
SNR is obviously an issue for any communication system; however, fiber attenuation is orders of magnitude lower than coax [0].
The bigger issue in this case would be mode dispersion, considering that they are going through "imaging" fibres, i.e. different spatial components of the light walking off from each other, causing temporal spread of the pulses until they overlap and you can't distinguish 1s and 0s (see the back-of-the-envelope sketch below).
[0] - Mind you, some of that for coax is due to other issues around CTB, and/or the fact that coax carries many frequencies at once, with each frequency having a different attenuation per 100 feet...
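For a feel of the mode-dispersion problem, here's the standard worst-case pulse spread for a step-index multimode fiber, dt = L * NA^2 / (2 * n1 * c). The NA and index below are assumed generic values, not specs for the imaging fiber in the article:

    c = 3e8      # m/s, speed of light in vacuum
    n1 = 1.46    # core refractive index (assumed, typical silica)
    NA = 0.39    # numerical aperture (assumed, high-NA like imaging bundles)
    L = 10.0     # m, the sub-10 m reach discussed above

    dt = L * NA**2 / (2 * n1 * c)
    max_rate = 1 / (2 * dt)   # crude "pulses must not overlap" limit
    print(f"pulse spread: {dt * 1e12:.0f} ps -> ~{max_rate / 1e6:.0f} Mb/s ceiling")
    # -> ~1736 ps of spread, i.e. a few hundred Mb/s; hitting 10 Gb/s per core
    #    at these lengths needs much better-behaved fiber than this toy model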
https://www.businesswire.com/news/home/20250422988144/en/Avi...
Noting also that there have been multiple articles on IEEE Spectrum about this startup in the past, I really hope the journalists don't own stock or have some other bias.
https://www.nature.com/articles/s41566-020-00754-y
https://www.nature.com/articles/s44172-022-00024-5
As far as I understand, you can only compute quite small neural networks before the noise gets too large, and only a very limited set of computations works well in photonics.
So if I'm streaming a movie, it could be that the video is actually literally visible inside the datacenter?
That's a big part of it. I remember, in the early Pentium 4 days, starting to see a lot more visible 'squiggles' in PCB traces on motherboards; the squiggles were essentially a case of 'these lines need extra length to be about as long as the other lines and not skew timing.'
In the case of what the article is describing, I'm imagining a sort of 'harness cable' with a connector on each end for all the fibers; if the fibers in the cable are all the same length, there wouldn't be a skew issue. (Instead, you worry about bend-radius limitations.)
> Would the SerDes be the new bottleneck in the approach
I'd think yes, but at the same time I can't really decide in my head whether it's a harder problem than normal mux/demux.
Things get interesting if the losses are high and a DFE (decision-feedback equalizer) is needed. This limits speed a lot, but copper solutions then moved to sending multi-bit symbols (PAM-3, 4, 5, 6, 8, 16...), which can also be done in the optical domain. One can even send multiple wavelengths optically, so there are ways to boost the bit rate without requiring higher clock frequencies.
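The arithmetic behind that is simple: each PAM-N symbol carries log2(N) bits, so the required symbol (baud) rate drops accordingly. A quick sketch, with 100 Gb/s as an arbitrary example rate:

    import math

    def baud_rate(bitrate_bps: float, pam_levels: int) -> float:
        # symbols/s needed to carry bitrate_bps with PAM-(pam_levels) signaling
        return bitrate_bps / math.log2(pam_levels)

    for levels in (2, 3, 4, 8, 16):
        print(f"PAM-{levels}: {baud_rate(100e9, levels) / 1e9:.1f} GBd for 100 Gb/s")
    # PAM-2 (plain on-off keying) needs 100 GBd; PAM-4 halves that to 50 GBd,
    # at the cost of tighter SNR margins between the levels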
Semi-accurate. For example, PCIe remains dominant in computing. PCIe is technically a serial protocol, and new versions (7.0 is releasing soon) keep increasing the serial transmission rate. However, PCIe also scales in parallel based on performance needs through "lanes", where one lane is a total of four wires arranged as two differential pairs: one for receiving (RX) and one for transmitting (TX).
PCIe scales up to 16 lanes, so a PCIe x16 interface has 64 wires forming 32 differential pairs. When routing PCIe traces, the lengths of all the differential pairs must be within <100 mils of each other (I believe; it's been about 10 years since I last read the spec). That addresses the "timing skew between lanes" you mention, and DRCs in the PCB design software will ensure the trace-length matching requirement is respected.
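For a sense of scale, here's what a 100-mil length mismatch means in time, assuming a typical FR-4-ish effective dielectric constant of ~4 (an assumed value; microstrip vs. stripline and the actual laminate change this):

    C = 3e8                     # m/s, speed of light in vacuum
    er_eff = 4.0                # assumed effective dielectric constant
    v = C / er_eff ** 0.5       # propagation velocity along the trace

    mismatch_m = 0.1 * 25.4e-3  # 100 mils = 0.1 inch -> meters
    skew_s = mismatch_m / v
    print(f"skew from 100 mils of mismatch: {skew_s * 1e12:.1f} ps")
    # -> ~17 ps, already more than half a unit interval at PCIe 5.0's
    #    32 GT/s (31.25 ps UI) -- hence the tight length-matching DRCs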
> how can this be addressed in this massively parallel optical interface?
From a hardware perspective, reserve a few "pixels" of the story's MicroLED transmitter array for link control rather than data transfer - for example, a clock or a data-frame synchronization signal. From the software side, design a communication protocol which negotiates a stable connection between the endpoints and incorporates checksums.
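As a sketch of that software side, here's the generic sync-word-plus-checksum framing pattern. This is hypothetical - nothing here is Avicena's actual protocol:

    import zlib

    SYNC = b"\xAA\x55\xAA\x55"  # arbitrary alignment pattern (assumed)

    def make_frame(payload: bytes) -> bytes:
        # prepend sync + length, append CRC32 so the far end can detect errors
        crc = zlib.crc32(payload).to_bytes(4, "big")
        return SYNC + len(payload).to_bytes(2, "big") + payload + crc

    def parse_frame(frame: bytes) -> bytes:
        # validate sync and CRC; a real link would request retransmission
        if frame[:4] != SYNC:
            raise ValueError("lost frame alignment")
        length = int.from_bytes(frame[4:6], "big")
        payload = frame[6:6 + length]
        crc = frame[6 + length:6 + length + 4]
        if zlib.crc32(payload).to_bytes(4, "big") != crc:
            raise ValueError("checksum mismatch")
        return payload

    assert parse_frame(make_frame(b"hello, photons")) == b"hello, photons"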
Abstractly, the serial vs. parallel dynamic shifts as technology advances. Raising clock rates to shove more data down the line faster (a serial improvement) works up to a point, but eventually you hit the limits of the current technology. Still need more bandwidth? Add more lines to meet your needs (a parallel improvement). Eventually the technology improves, and the cycle continues. PCIe is a perfect example.
I wonder if metamaterials might provide such nonlinearities in the future.