GE has a paper about the power conversion design, but it doesn't mention the unit to rack electrical and mechanical interface. Liteon is working on that, but the animation is rather vague.[2] They hint at hot plugging but hand-wave how the disconnects work. Delta offers a few more hints.[3] There's a complex hot-plugging control unit to avoid inrush currents on plug-in and arcing on disconnect. This requires active management of the switching silicon carbide MOSFETs.
There ought to be a mechanical disconnect behind this, so that when someone pulls out a rackmount unit, a shutter drops behind it to protect people from 800V. All these papers are kind of hand-wavey about how the electrical safety works.
Plus, all this is liquid-cooled, and that has to hot-plug, too.
[1] https://library.grid.gevernova.com/white-papers-case-studies...
[2] https://www.youtube.com/watch?v=CQOreYMhe-M&
[3] https://filecenter.deltaww.com/Products/download/2510/202510...
> When it is detected that the PDB starts to detach from the interface, the hot-swap controller quickly turns off the MOSFET to block the discharge path from Cin to the system. After the main power path is completely disconnected, the interface is physically detached, and no current flows at this time
> For insertion, long pins (typically for ground and control signals) make contact first to establish a stable reference and enable pre-insertion checks, while short pins (for power or sensitive signals) connect later once conditions are safe; during removal, the sequence is reversed, with short pins disconnecting first to minimize interference.
With that sort of voltage you should be able to use a capacitive or inductive sensor to activate a relay.
Then I started routing ethernet with PoE throughout my house and observed that other than a few large appliances, the majority of powered devices in a typical home in 2026 could be supplied with DC over PoE as well! Lighting, laptops, small/medium televisions. The current PoE spec allows up to 100 W, which covers like 80% of the powered devices in most homes. I think it would make more sense to have fewer AC outlets around the modern house and many more terminals for PoE instead (maybe with a more robust connector than RJ45). I wonder what sort of energy efficiency improvements this would yield. No more power bricks all over the place either.
We installed 120 LED ceiling lights in our home circa 2020, all of which were run with high voltage (romex) and accompanied by 120 little transformer boxes that mount inside the ceiling next to them.
Later ...
We installed outdoor lighting with low voltage, outdoor-rated wiring, powered by a 12V transformer[1], and I felt the same way you did: why did we use a mile of romex and install all of those little mini transformers when we could have powered the same lights with 12V and low-voltage wire?
I then learned that the energy draw of running the low-volt transformer all the time - especially one large enough to supply an entire house of lighting - would more than cancel out energy savings from powering lower voltage fixtures.
You don't have this problem with outdoor lighting because the entire transformer is on a switch leg and is off most of the time.
So ... I like the idea of removing a lot of unnecessary high voltage wire but it's not as simple as "just put all of your lights behind a transformer".
[1] https://residential.vistapro.com/lex-cms/product/262396-es-s...
With double-conversion, generally yes.
I recently ran across the (patented?) concept of a delta conversion/transformer UPS that seems to eliminate/reduce the inefficiencies:
* https://dc.mynetworkinsights.com/what-are-the-different-type...
* a bit technical: https://www.youtube.com/watch?v=nn_ydJemqCk
* Figures 6 to 8 [pdf]: https://www.totalpowersolutions.ie/wp-content/uploads/WP1-Di...
The double-conversion only occurs when there's a 'hiccup' from utility power, otherwise if power is clean the double-conversion is not done at all so the inefficiencies don't kick in.
I find it a little hard to imagine that those devices outnumber things like stoves, dishwashers, washers/dryers, kettles, hair dryers... by 4:1.
Unsure why PoE would be better for LED lighting than the standard approach of screwing a bulb directly into AC, either. How many lumens do you get out of strip lights these days? And you still have AC-DC conversion for whatever's sourcing power onto the Ethernet link.
USB-C could be that connector, using USB-PD instead of PoE. Though I'm not sure I'd want to need that much smarts for every single power outlet.
Efficiency isn't as straightforward either. You're still being fed by 120V/230V AC, so you're going to need some kind of centralized rectifier and down converter. It'll need to be specced for peak use, but in practice it'll usually operate at a fraction of that load - which means it'll have a pretty poor efficiency. A per-device PSU can be designed exactly for the expected load, which means it'll operate at its peak efficiency.
We also don't use 5V DC grids because the wire losses would be horrible, so a domestic DC grid should probably operate at pretty close to regular AC voltage as well. In practice this means the most sensible option would be to have a centralized rectifier and a grid operating at whatever voltage it outputs - but what would be the point?
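A toy I²R calculation makes the wire-loss point concrete. The wire gauge, run length, and load here are illustrative assumptions, not a design guide:

```python
# Back-of-envelope: wire loss delivering a 100 W load over a 10 m run
# (20 m round trip) of 12 AWG copper (~5.2 mOhm per metre).
R = 20 * 0.0052          # round-trip conductor resistance, ohms
P = 100.0                # load power, watts
for v in (5, 48, 120):
    i = P / v            # current drawn at this supply voltage
    loss = i**2 * R      # power dissipated in the wire itself
    print(f"{v:>3} V: {i:6.2f} A, {loss:6.2f} W lost in the wire")
```

At 5 V the copper eats roughly 40% as much power as the load itself; at 48 V it's under half a watt, which is part of why PoE settled on ~48 V.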
As to PoE: I personally really like the idea, but I don't believe it'll have a bright future. For its traditional use the main issue is that there doesn't seem to be a future for twisted-pair beyond 10Gbps. 25GBASE-T might exist as a standard on paper, but the hardware never took off due to complete disinterest from the datacenter market, and it is too limited to be of use in offices and homes. I fully expect that 25G will arrive in the home and office as some form of fiber-optic interconnect - with fiber+copper hybrid for things like access points.
On the other hand, for a lot of IoT applications PoE seems to be too complicated and too expensive. It makes sense for things like cameras, but individual lights, or things like smoke sensors are probably better served in office/industrial applications by either a regular AC supply or a local DC one, plus something like KNX, X10, CAN, or Modbus for comms: just being able to be wired as a bus rather than a star topology is already a massive advantage. And for domestic use the whole "has a wire" thing is of course a massive drawback - most consumers strongly prefer using Wifi over running a dedicated wire to every single little doodad.
I think it's highly unlikely we'll see mass scale retrofits, but if enough momentum builds up, I can see it as a great bonus feature for new builds.
I got lucky with my house and every room has a dedicated phone line meeting at a distribution panel (a couple of 2x4s with screw terminals) built in the 50s. I'm in the process of converting it to light duty DC power. The wiring is only good for an amp or two, but at 48v that's still significant power transmission.
It's super nice because you only need to put the UPS/ATS at the PoE switch and then you get power redundancy everywhere you have ethernet running (i.e. the phones don't go down).
1. One of these is simplicity. With AC, one single home run of cabling (eg, Romex) can feed a whole room full of stuff, like a bedroom or a living room. At one end of the run is a circuit breaker (a fairly simple electromechanical device) and at the other end is a series of outlets (which are physically daisy-chained, but are functionally just wired in parallel with each other).
Since one single run of cable can feed many devices, it is easy to accomplish.
2. Another advantage is that it is universal. Anything can plug into these outlets. Whatever a person brings into the home to use, they can plug it into an outlet and it works. It works this same way in every home.
3. And there's quite a lot of power available: A common 20A 120v branch circuit cabled up with 12AWG Romex is stated to supply up to 16A continuously, or 1920W. For intermittent loads, it can supply 20A -- or 2400W. That's tiny by European standards, but it's still quite a lot of power. It's plenty to run a space heater when Grandma visits and she complains about the guest room being cold (even as you start to sweat when you cross the threshold to investigate) and a big TV and a whole world of table lamps, all at once. And you can plug this stuff into any outlets in a room, and it Just Works.
4. But, sure: Lots of devices want DC, not AC. So there's a necessary conversion step that is either integral to the device being plugged in, or in the form of the external wall warts we all know very well.
So let's compare to power-over-ethernet.
1. It's also simple, but only tangentially so. One home-run cable per outlet, whether that outlet is used or not, is something that can be rationalized as being a simple topology. A PoE switch at the head-end instead of a central box with circuit breakers is a simple-enough thing to transition to. And a lot more individual cables are required, but they're relatively small and are generally easier to install.
2. It's standardized, but it's not universal at all. I've got a few PoE widgets around the house, but I'm pretty friggin' weird when it comes to what I do with electricity. I can't go to Wal-Mart and buy more PoE widgets to use at home, and when people visit they aren't bringing PoE adapters to charge their phones and other electronics. My computer monitor doesn't have a PoE input. I can easily imagine a table lamp or a fan that connects to PoE, and also uses it as a network connection for automation, and that sounds pretty sweet in ways that tickle my automation bones in the most filthy of fashions... but that's getting even further into the weeds compared to how regular people expect to do regular things.
3. There isn't a lot of power available. 802.3bt Type 4 is the highest spec. And within that spec: While switch ports can output up to 100W, a device being powered is limited to drawing no more than 71.3W. Now, sure, that's 71.3W per port, but in a room with 10 ports that's still only ~700W -- at most -- in that room. And Grandma's space heater won't run on 71.3W, nor her electric blanket. My laptop wants more than this. The list of useful, portable things that we casually plug into a wall that draw less than 71.3W is pretty short, and most don't benefit from the main advantage of PoE, which is a combination of [some] power alongside high-speed Ethernet data.
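The arithmetic in point 3 can be sketched directly (spec figures as quoted above; the 10-port room is the hypothetical from the comment):

```python
# 802.3bt Type 4: a switch port sources up to 100 W, but the powered
# device is only guaranteed 71.3 W after cable losses.
pd_watts = 71.3
ports = 10
room_total = pd_watts * ports
print(f"10-port room budget at the devices: {room_total:.0f} W")
print(f"1500 W space heater fits on one port? {1500 <= pd_watts}")
```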
4. We still need wall warts since PoE is nominally ~48VDC. For example: Phones use less than 71.3W while charging, but they don't run on 48V. That means 120V AC comes in from the grid, gets shifted to 48VDC for distribution within the dwelling, and then gets shifted yet again to produce the power (5, 9, 15, and 20V are common-enough in USB PD world) that devices actually want. That's more lossy conversion steps, not fewer -- and we still get to keep the extra conversion (wall warts) as punishment for our great ideas. This is not the path towards increased energy efficiency.
---
PoE is great for the things we use it for today. A camera, a wireless access point -- you know, fixed-location stuff that uses networked data as its primary function and also requires power.
Installed PoE light fixtures (like, say, task lights in a kitchen) also sounds neat -- unless they die prematurely and no PoE replacements are to be found. (Now, you have not just one or two problems, but many: The lights aren't working in that space and they can't be replaced with a trip to Lowes because the Romex that would normally have been installed was deliberately deleted from the plan. It could have been a 20-minute DIY fix that costs less than $100, but now it involves drywall and paint and retrofitting new cabling. Or maybe PoE replacements do exist, but it's now 2035 and the new ones don't talk the same network protocols as the old ones did.)
But there are other upsides: I've got an 8-port PoE-powered network switch that works a treat. It's a dandy little thing. And it sure would be neat to plug my streaming box in with PoE and kill two birds with one cable; I would like that very much.
But most people? Most people don't give a damn about ethernet (PoE, or not!) these days, or streaming boxes, and that trend is increasing. They just plug their lamp into the regular outlet on the wall like they always have, and deal with whatever terrible UI is built into their smart TV, and use wifi for anything that needs data.
And when they buy a home that is filled with someone else's smart infrastructure, their first task (more often than not) is to figure out who to call to erase those parts completely and put it back to being normal and boring.
See e.g. https://www.dell.com/support/kbdoc/en-us/000221234/wiring-in...
But what about availability? If you ask most of our users whether they’d prefer 4 9s of availability or 10% more money to spend on CPUs, they choose the CPUs. We asked them.
There are a lot of availability-insensitive workloads in the commercial world, as well, like AI training. What matters in those cases is how much computing you get done by the end of the month, and for a fixed budget a UPS reduces this number.
And then every machine has a switching power supply to convert this to low-voltage DC, and then probably random point-of-load converters in various places (DC -> AC -> DC again) for stuff like the CPU / GPU core, RAM, etc. Each of these stages may be ~95% efficient with optimal load, but the losses add up, and get a lot worse outside a narrow envelope.
Yes, of course both of those things are true, and yes, some data centers do engage in those processes for their unique advantages. The issue is that aside from specialty kit designed for that use (like the AWS Outposts with their DC conversion), the rank-and-file kit is still predominantly AC-driven, and that doesn't seem to be changing just yet.
While I'd love to see more DC-flavored kit accessible to the mainstream, it's a chicken-and-egg problem that neither the power vendors (APC, Eaton, etc) or the kit makers (Dell, Cisco, HP, Supermicro, etc) seem to want to take the plunge on first. Until then, this remains a niche-feature for niche-users deal, I wager.
https://www.nokia.com/bell-labs/publications-and-media/publi...
https://developer.nvidia.com/blog/nvidia-800-v-hvdc-architec...
https://blogs.nvidia.com/blog/gigawatt-ai-factories-ocp-vera...
Almost everybody in the industry is embracing 800V DC, mostly because of Vera Rubin and the increased electricity requirements.
It's much cheaper, quicker, and easier to use cooling blocks with leak-proof quick connectors to do liquid cooling. It means you can use normal equipment, and don't need to reinforce the floor.
A lot of "edge" stuff has 12/48v screw terminals, which I suspect is because they are designed to be telco compatible.
For megawatt racks though, I'm still not really sure.
DC doesn't have such a killer. There are a decent bunch of benefits, and the main drawback is gear availability. However, the chicken-and-egg problem is being solved by hyperscalers. Like it or not, the rank-and-file of small & medium businesses is dying, and massive deployments like AWS/GCP/Azure/Meta are becoming the norm. Those four already account for 44% of data center capacity! If they switch to DC can you still call it "specialty kit", or would it perhaps be more accurate to call it "industry norm"?
It is becoming increasingly obvious that the rest of the industry is essentially getting Big Tech's leftovers. I wouldn't be surprised if DC became the norm for colocation over the next few decades.
[0]: https://thecoolingreport.com/intel/pfas-two-phase-immersion-...
Looking at the manual for the first server line that came to mind, you can buy a Dell PowerEdge R730 today with a first-party supported DC power supply.
Many datacenters I'd been to at that point were already DC.
Didn't think this was that new of a trend in 2026, but also acknowledge I did not visit more than a handful of datacenters since 2007.
It just seemed like an undeniably logical thing to do.
800 volts DC, at the megawatt power supply levels, implies fault impulses of more than a megajoule. Google tells me that's about 2 hand grenades worth of boom. That's an optimistic lower bound.
The resulting copper plasma cloud is a burn and inhalation hazard, along with the overpressure.
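A rough sanity check on the megajoule claim. The one-second fault duration and the ~0.2 kg TNT-equivalent per grenade are both loose assumptions:

```python
# 1 MW flowing into a fault for one second is 1 MJ. A hand grenade is
# very roughly 0.2 kg TNT-equivalent at ~4.184 MJ/kg -- assumed figures.
fault_energy = 1e6 * 1.0            # J (power x duration)
grenade = 0.2 * 4.184e6             # J per grenade, assumed
print(f"{fault_energy/1e6:.1f} MJ ~ {fault_energy/grenade:.1f} grenades")
```

Protection should of course trip far faster than one second, but stored capacitor energy and arc-flash energy stack on top, so the order of magnitude stands.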
Let's say you get a 10 kiloamp fault current, this will then induce voltages everywhere you don't want it to go. If all the interconnects are fiber, that's really not a problem, but you have to have everything EMP shielded if you don't want boards popping randomly after such an event.
The "efficiency" of removing the extra power conversions also removes filtering and surge suppression. It's entirely possible that one power supply over-voltage takes out half of your racks. The MOSFETs used tend to fail closed instead of open, making failures far worse than a simple outage.
Very smart people are making very smart mistakes.
However, higher DC voltage is riskier, and it's not at all standard for electrical and building code reasons. In particular, breaking DC circuits is more difficult because there's no zero-crossing point to naturally extinguish an arc, and 170V (US/120VAC) or 340V (Europe/240VAC) is enough to start a substantial arc under the right circumstances.
Unfortunately for your lighting, it's also both simple and efficient to stack enough LEDs together such that their forward voltage drop is approximately the rectified peak (i.e. targeting that 170/340V peak). That means that the bulb needs only one serial string of LEDs without parallel balancing, making the rest of the circuitry (including voltage regulation, which would still be necessary in DC world) simpler.
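As a rough illustration of the series-string point (the ~3 V forward drop per white LED is an assumption):

```python
import math

# How many series LEDs it takes to reach the rectified mains peak.
for vac in (120, 240):
    vpeak = vac * math.sqrt(2)      # rectified peak of the AC waveform
    n = round(vpeak / 3.0)          # assuming ~3 V forward drop per LED
    print(f"{vac} VAC -> {vpeak:.0f} V peak -> ~{n} LEDs in series")
```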
The part that would genuinely be cheaper is avoiding problematic flicker. It takes a reasonably high quality LED driver to avoid 120Hz flicker, but a DC-supplied driver could be simpler and cheaper.
IEEE 802.3bt can deliver up to 71W at the destination: just pull Cat 5/6 everywhere.
* https://en.wikipedia.org/wiki/Power_over_Ethernet#Standard_i...
The gain from DC-DC converters is small, and DC devices are a small part of usage compared to appliances. There is no way it would pay back the cost of replacing all the appliances.
It is silly to have AC-to-DC converters in all of my wall-connected electronics (LED bulbs, home controller, computer equipment, etc.)
You could wire your house for 12, 24 or 48V DC tomorrow and some off-grid dwellers have done just that. But since inverters have become cheap enough such installations are becoming more and more rare. The only place where you still see that is in cars, trucks and vessels.
And if you thought cooking water in a camper on an inverter is tricky wait until you start running things like washing machines and other large appliances off low voltage DC. You'll be using massive cables the cost of which will outweigh any savings.
I end up converting stuff anyhow, because all my loads run at different voltages. Even though I had my lights, vent fan, and heater fans running on 12V, I still ended up having to change voltages for most of the loads I wanted to run, or generate AC to charge my computer and run a rice cooker.
Not to mention that running anything that draws any real power quickly needs a much thicker wire at 12V. So you're either needing to run higher voltage DC than all your loads for distribution and then lowering the voltage when it gets to the device, or you simply can't draw much power.
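The thicker-wire point can be made concrete with a voltage-drop calculation. The 1 kW load, 10 m run, and 3% drop budget are assumptions for illustration:

```python
# Copper cross-section needed to keep voltage drop under 3% over a 10 m
# run (20 m round trip) for a 1 kW load, at 12 V vs 120 V.
rho = 1.72e-8                 # copper resistivity, ohm*m
length = 20.0                 # round-trip conductor length, m
for v in (12, 120):
    i = 1000.0 / v            # load current
    r_max = 0.03 * v / i      # max loop resistance for a 3% drop
    area = rho * length / r_max * 1e6   # required cross-section, mm^2
    print(f"{v:>3} V: {i:5.1f} A, needs >= {area:5.1f} mm^2 of copper")
```

Required copper scales with 1/V², so the 12 V run needs 100x the cross-section of the 120 V run for the same load and drop percentage.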
Not that you can't have higher voltage DC; with my newer system the line from my solar panels to my charger controller is around 350VDC and I can use 10awg for that... but none of the loads I own that draw much power (saws, instapot, rice cooker, hammond organ, tube guitar amp) take DC :D
Thus, even if you had DC in the walls, it would be 100+ volts, and you'd still have conversion down to the lower voltages that electronics use. If you look at the comments in this thread from people who work in telco, they talk about how voltage enters equipment at -48V and is then further lowered.
We have some old ceiling and exhaust fans, but I know those can be replaced. Our refrigerator is AC, but extended family with an off-grid home has a DC refrigerator that cycles way less, probably due to multiple design factors but I’m sure the lack of transformer heat is part of it. I’m not as sure about laundry machine or oven/cooktop options but I believe those are also running on DC in the off-grid home without inverters.
Most of these AC appliances also have transformers in them anyway for the control boards. It seems kind of insane to me that we are still doing things this way.
A DC household would have to choose a trade-off between multiple lines with different voltages or fewer voltages that need to be adapted to the appliances. And we're right back at the AC situation, but worse since DC voltages are more difficult to change.
But big consumers like datacenters can very well plan ahead and standardize on a single DC voltage. They already need beefy equipment to deal with interruptions, power surges, non-sinusoidal components, and brownouts, which already involves transformers, capacitors, and DC conversion for battery storage. Therefore almost no additional equipment is required.
AC motors use way more power than the piddly control boards in most home appliances. So you lose a little efficiency on conversion, but being 80% efficient doesn't matter much when it's 1-5% of the device's energy budget. You generally gain way more than that from similarly priced AC motors being more efficient.
Installing a ceiling fan used to be treacherous and so heavy. Also loud and buzzy after installed. Now the fans in these things are so lightweight and easy.
seeing the same in many more areas (lighting, etc)
The irony is all the recessed lights I picked out are DC, they all have little AC-DC boxes hanging off them using a proprietary connector. If I hadn't needed to pass a rough-in inspection going all DC would've been trivial.
For 800V DC, a simple UPS could interface with the main supply using just a pair of (large) diodes, and a more complex and more efficient one could use some fancy solid state switches, but there’s no need for anything as complex as a line-interactive AC UPS.
Hard as a rock!
Well, it's harder than a rock! If your house gets 800V DC you're still gonna need "bricks" to convert that to the 5VDC or 12VDC (or maybe 19VDC) that most of the things that currently have "bricks" need.
And if your house gets lower voltage DC, you're gonna have the problem of worth-stealing sized wiring to run your stove, water heater, or car charger.
I reckon it'd be nice to have USB C PD ports everywhere I have a 220VAC power point, but 5 years ago that'd have been a USB type A port - and even now those'd be getting close to useless. We use a Type I (AS/NZS 3112) power point plug here - and that hasn't needed to change in probably a century. I doubt there's ever been a low voltage DC plug/socket standard that's lasted in use for anything like that long - probably the old "car cigarette lighter" 12V DC thing? I'm glad I don't have a house full of those.
My understanding is that DC breakers are somewhat prone to fires for this reason, too.
(My stand mixer is the lone sad exception)
Once you get into higher power (laptops and up), switching and distribution get harder, so the advantages fade.
For bigger appliances (fridge, etc), AC is fine + practical.
However, there's also PoE (24 or 48V!), so maybe that's the right approach. It's not like each outlet is going to run a heater anyway.
I always thought AC’s primary benefit was its transmission efficiency??
Would love to learn if anyone knows more about this
The transmission efficiency of AC comes from the fact that you can pretty trivially make a 1 megavolt AC line. The higher the voltage, the lower the current has to be to provide the same amount of power. And lower current means less power in line loss due to how electricity be.
But that really is the only advantage of AC. DC at the same voltage as AC will ultimately be more efficient, especially if it's humid or the line is underwater. Due to how electricity be, a change in the current of a line will induce a current into conductive materials. A portion of AC power is being drained simply by the fact that the current on the line is constantly alternating. DC doesn't alternate, so it doesn't ever lose power from that alternation.
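The I²R argument from the first paragraph, as a toy calculation (the 1 Ω line resistance and 10 MW load are assumptions):

```python
# Line loss for a fixed delivered power at different transmission
# voltages: I = P/V, loss = I^2 * R, so loss falls with V squared.
R_line = 1.0                  # ohms, assumed line resistance
P_load = 10e6                 # 10 MW delivered
for v in (10e3, 100e3, 1e6):
    i = P_load / v
    loss = i**2 * R_line
    print(f"{v/1e3:6.0f} kV: {loss/1e3:10.3f} kW lost ({loss/P_load:.3%})")
```

Each 10x step up in voltage cuts the resistive loss by 100x, which is the whole game in transmission.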
Another key benefit of DC is can work to bridge grids. The thing causing a problem with grids being interconnected is entirely due to the nature of AC power. AC has a frequency and a phase. If two grids don't share a frequency (happens in the EU) or a phase (happens everywhere, particularly the grids in the US) they cannot be connected. Otherwise the power generators end up fighting each other rather than providing power to a load.
In short, AC won because it was cheap and easy to make high-voltage AC. DC is coming back because it's only somewhat recently become affordable to make similar transformations on DC, from high to low and low to high voltages. DC carries further benefits that AC does not.
BTW, megavolt DC-DC converters are a sight to behold: https://en.wikipedia.org/wiki/File:Pole_2_Thyristor_Valve.jp...
There are many factors involved, and "efficiency" is only one. Cost is the real driver, as with everything.
AC is effective when you need to step down frequently. Think transformers on poles everywhere. Stepping down AC using transformers means you can use smaller, cheaper conductors to get from high-voltage transmission to lower-voltage distribution and, finally, to consumers. Without this, you need massive conductors and/or high voltages and all the costs that go with them.
AC is less effective, for instance, when transmitting high power over long, uninterrupted distances or feeding high density DC loads. Here, the reactive[1] power penalty of AC begins to dominate. This is a far less common problem, and so "Tesla won" is the widely held mental shortcut. Physics doesn't care, however; the DC case remains and is applied when necessary to reduce cost.
- Three conductors vs two, but they can be the next gauge up since the current flows on three conductors
- no significant skin effect at 400Hz -> use speaker wire, lol.
- large voltage/current DC breakers are... gnarly, and expensive. DC does not like to stop flowing
- The 400Hz distribution industry is massive; the entire aerospace industry runs on it. No need for niche or custom parts.
- 3 phase @ 400Hz is x6 = 2.4kHz. Six diodes will rectify it with almost no relevant amount of ripple (Vmin is 87% of Vmax) and very small caps will smooth it.
As an aside, with three (or more) phase you can use multi-tap transformers and get an arbitrary number of poles. 7 phases at 400Hz -> 5.6kHz. Your PSU is now 14 diodes and a ceramic cap.
- you still get to use step up/down transformers, but at 400Hz they're very small.
- merging power sources is a lot easier (but for the phase angle)
- DC-DC converters are great, but you're not going to beat a transformer in efficiency or reliability
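The 87% ripple figure in the list above checks out: an n-pulse rectifier output follows the envelope of n phase-shifted sinusoids, dipping to cos(π/n) of the peak between crests.

```python
import math

# Worst-case dip of an n-pulse rectifier output, as a fraction of peak.
for pulses in (6, 14):            # 3-phase bridge, and the 7-phase aside
    dip = math.cos(math.pi / pulses)
    print(f"{pulses:>2}-pulse: Vmin/Vmax = {dip:.3f}")
```

The 6-pulse case gives 0.866 (the "87%" above); the 14-pulse case from the 7-phase aside gives about 0.975, i.e. only ~2.5% ripple before any filtering.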
now run that unshielded wire 50 meters past racks of GPUs and enjoy your EMI
> The 400Hz distribution industry is massive; the entire aerospace industry runs on it
nothing in that catalog is rated for 100kW–1MW rack loads at 800Vrms
> 3 phase @ 400Hz is x6 = 2.4kHz... Your PSU is now 14 diodes and a ceramic cap
you still need an inverter-based UPS upstream, which is the exact conversion stage DC eliminates
> large voltage/current DC breakers are.. gnarly, and expensive. DC does not like to stop flowing
SiC solid-state DC breakers are shipping today from every major vendor
> DC-DC converters are great, but you're not going to beat a transformer in efficiency or reliability
wide-bandgap converters are at 95%+ with no moving parts
The skin depth, by the way, is sqrt(2 × 1.7e-8 Ω·m / (2π × 400 Hz × μ0)) ≈ 3 mm for copper---OK for a single rack, but it starts to be significant for the type of bus bars that an aisle of racks might want.
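The figure can be reproduced directly from that formula:

```python
import math

# Skin depth: delta = sqrt(2*rho / (omega * mu0)), here for copper.
rho = 1.7e-8                      # resistivity, ohm*m
mu0 = 4 * math.pi * 1e-7          # vacuum permeability, H/m
for f in (60, 400):
    delta = math.sqrt(2 * rho / (2 * math.pi * f * mu0))
    print(f"{f:>3} Hz: skin depth {delta*1e3:.1f} mm")
```

At 60 Hz the depth is ~8.5 mm, at 400 Hz ~3.3 mm, which is why thick busbars care about frequency while thin cable mostly doesn't.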
As for efficiency, both 400Hz transformers AND fancy DC-DC converters are around 95% efficient, except that AC requires electronics to rectify it to DC, losing another few percent, so the slight advantage goes to DC, actually.
As for merging power, remember that a DC-DC converter uses an internal AC stage, so it's the same---you can have multiple primary windings, just like for plain AC.
What are you talking about? There's a very significant skin effect at 400Hz. Skin effect goes up with frequency. These datacenters use copper busbars, not cable, so skin effect is an important consideration.
Other people, of course, have other definitions of high voltage:
"This resonant tower is known as a Tesla coil. This particular one is just over 17 feet tall and it can generate about a million volts at 60,000 cycles per second."
and:
"This pulse forming network can deliver a shaped pulse of over 50,000 amps with a total energy of about 1,057 times the tower primary energy"
If there had been anything like a high-power transistor back then, he would have used that. High-power transistors robust enough to handle the grid were designed only recently, over 100 years after the Tesla/Edison AC/DC argument.
This!
The sooner people realize these facts the better. Pervasive high-rise buildings did not happen before the invention of modern cranes.
Exactly twenty years ago I was doing novel research on GaN characterization, and my supervisors made a lot of money with consultations around the world, and successfully founded a government-funded start-up company around the technology. Together with SiC, these are the two game-changing power-device technologies built on wide-bandgap semiconductors, and they have only recently matured.
Heck, even the Nobel-prize-winning blue LED discovery was only made feasible by GaN. Watch the excellent video made by Veritasium on this back story [1].
[1] Why It Was Almost Impossible to Make the Blue LED:
The podcaster Sebastian Major from "Our Fake History" did a looonnngg Patreon episode on Tesla and debunked most of the weird myths around him. Sebastian doesn't have a vendetta or anything; it's just amazing how much of the Tesla stuff is just nonsense or is viewed through a very weird bias nowadays. Major also briefly touches on the weird Edison stuff and how the internet has twisted Edison into a villain.
I only found Edison in the headline, I didn't find it anywhere in the body, nor did I find Tesla. Glancing through the article it almost seems like someone tried to make a catchy headline to get clicks.
You can have the best idea in the world, but if you can't manufacture it you're SOL.
Mercury arc rectifiers were used long before his death.
can we stop vibe generating headlines?