The reason graphene is great for transistors is that with its higher band energy it's 5x harder for it to soft-set itself. So if we pretend today's 11nm transistors have a 1% chance of electron tunneling, carbon would have a 0.2% chance.
The problem is that the same switch would take 5x as much power to switch. Which means a modern 220W CPU would now need 1100 watts of power :x
Those are not the same thing. You can have 5x voltage without changing the power or the energy.
The band gap is not a function of the element alone, but mainly of the crystal structure. Different allotropes of carbon have vastly different band structures.
Diamond has a high band gap, as you mentioned. But graphene has none, and that is one of the major obstacles to using the material.
I should have said electron affinity rather than electronegativity: You're not trying to get electrons from the valence band into the vacuum, just from the conduction band to the vacuum. Diamond actually has a negative electron affinity, which is why the vacuum electronic folks were excited about it.
[1] http://www.decodedscience.com/helium-shortage-situation-upda...
But microprocessors are tiny. The i7 4770, which is in my machine right now, has a die size of 177mm^2. The die has a thickness of 775um, for a volume of ~137mm^3.
Even if you made the whole processor out of helium, you could make 100,000 of them out of a single 14 liter party balloon.
The Zeppelin NT, which holds 8225m^3 of helium, contains enough for 60 billion high end processors.
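For anyone who wants to check the arithmetic, here's a quick sketch (die dimensions as stated above; the rest is just unit conversion):

```python
# Back-of-the-envelope check of the figures above.
die_area_mm2 = 177          # i7-4770 die area, mm^2
die_thickness_mm = 0.775    # 775 um
die_volume_mm3 = die_area_mm2 * die_thickness_mm

balloon_liters = 14
balloon_mm3 = balloon_liters * 1_000_000      # 1 L = 1e6 mm^3

zeppelin_m3 = 8225
zeppelin_mm3 = zeppelin_m3 * 1_000_000_000    # 1 m^3 = 1e9 mm^3

print(die_volume_mm3)                 # ~137 mm^3 per die
print(balloon_mm3 / die_volume_mm3)   # ~100,000 dies per balloon
print(zeppelin_mm3 / die_volume_mm3)  # ~60 billion dies per zeppelin
```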
Oh, sure, there will be losses and such, but this is still a trivial expense next to the billions of dollars of fab work that will be required. In these quantities we literally use gold without a second thought for the price.
[1]: http://www.praxairdirect.com/Product2_10152_10051_14626_-1_1...
24k gold is $42/gram, and 1 gram is 0.052 milliliters.
So, 42 / 0.052 × 1.5 ≈ $1,211.
Note: the important parts are not 5mm thick, etc., but gold is rather expensive by volume.
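A quick sanity check of that arithmetic (same figures as above; the 1.5 mL part volume is the figure assumed in the comment):

```python
# Sanity check of the gold arithmetic: $42/gram at 0.052 mL/gram.
price_per_gram = 42.0
ml_per_gram = 0.052

price_per_ml = price_per_gram / ml_per_gram   # ~$808 per mL
cost = price_per_ml * 1.5                     # hypothetical 1.5 mL of gold
print(cost)                                   # ~1211.5, i.e. roughly $1,200
```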
I don't think the authors delved too hard into the history. I interviewed at General Dynamics in Fort Worth around 1986: they told us that they had just designed out the last of the tubes on the F-16 and were working to remove the last tubes from the F-4 Phantom.
Nobody mentioned that the tubes were around as a countermeasure to EMP; it was more about waiting for technologies to mature to the point that you'd believe they were battle-tested enough for your most advanced weaponry.
Edit: The above anecdote in reference to the article claiming
> By the mid-1970s, the only vacuum tubes you could find in Western electronics were hidden away in certain kinds of specialized equipment
I once heard that one of the reasons the Soviets continued to use analog systems was that they were "faster, more compact, and more power efficient" [than a digital computer]. This came at the cost of flexibility. For example, an op-amp allegedly can do integration faster and with less power than a digital computer, but the IC can't be reprogrammed. Digital computers have huge benefits, but ones that come at a cost.
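For concreteness, the digital counterpart of that analog integrator is just an accumulation loop; every output sample costs a multiply-add, which is where the speed/power gap comes from (an illustrative sketch, not any specific hardware):

```python
# A digital stand-in for the op-amp integrator: rectangle-rule accumulation.
# Each output sample costs a multiply-add -- work the analog circuit does
# continuously and "for free", at the cost of being non-reprogrammable.
def integrate(samples, dt):
    total = 0.0
    out = []
    for x in samples:
        total += x * dt
        out.append(total)
    return out

# Integrating a constant 1.0 for one second at 1 kHz gives ~1.0
result = integrate([1.0] * 1000, dt=0.001)
print(result[-1])
```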
Thoughts/input anyone? As I said, I might be completely off base, so please nobody take that as anything more than "food for thought".
Robustness of power electronics is another consideration: tubes are mechanically fragile but not vulnerable to ESD, whereas FETs are, especially during assembly. If their factory process control was poor it would have been easier to stick with the tubes.
There's a middle ground where an EMP would be strong enough to fry a bipolar transistor but too weak to harm a tube. I don't know how significant that is for military strategy, but the automatic answer of "transistors can't handle EMP" isn't completely right.
Check out Indium Phosphide HEMTs (high electron mobility transistors): http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4419013
Mark Rodwell's group in UCSB (http://www.ece.ucsb.edu/Faculty/rodwell/rodwell_info/rodwell...) has been working on these transistors for a while. I think they're pushing 2 THz currently.
To add an example of a real technique: You can use a short laser pulse and change the time-of-flight (mirrored path on a stepper motor, for instance). This technique will get you to the terahertz region, which is pretty much state of the art for where electronic devices still have gain.
http://en.wikipedia.org/wiki/Terahertz_time-domain_spectrosc...
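The numbers behind the time-of-flight trick are worth seeing: mirror travel maps to delay at the speed of light, so micron-scale steps give picosecond-scale resolution (back-of-the-envelope, illustrative figures):

```python
# Why a mirror on a stepper motor gives THz-scale time resolution:
# moving the mirror by dx lengthens the optical path by 2*dx (out and back),
# so the delay step is dt = 2*dx / c.
c = 299_792_458      # speed of light, m/s
dx = 150e-6          # 150 um of mirror travel (illustrative)
dt = 2 * dx / c      # delay in seconds
print(dt)            # ~1e-12 s, i.e. ~1 ps -> resolves ~1 THz features
```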
It's been a couple of decades since I last looked: The diamond thin-film guys thought that power would be a sensible application. There is a technological hurdle: If you notice in the diagram, the electrodes are pointed. This increases the field at the tip, in order to overcome the electric-potential barrier to emission of the electrons. Reliability issues arise because the emission is occurring over a smaller surface area.
Like I said, it's been a while since I last looked, but vacuum devices have had appeal for power apps for a long time.
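To put rough numbers on the pointed-electrode trick: the local field near a sharp tip scales roughly as V/(k·r) for tip radius r and a geometry factor k of a few, so sharpening the tip buys field strength without raising the drive voltage (all numbers illustrative):

```python
# Crude illustration of why the emitter electrodes are pointed: for a sharp
# tip the local field scales roughly as E ~ V / (k * r), where r is the tip
# radius and k ~ 5 is a geometry factor. All numbers illustrative.
def tip_field(volts, radius_m, k=5.0):
    return volts / (k * radius_m)   # V/m

v = 10.0                            # modest drive voltage
print(tip_field(v, 1e-6))           # 1 um tip:  2e6 V/m
print(tip_field(v, 10e-9))          # 10 nm tip: 2e8 V/m -- 100x higher field
```

The flip side, as noted above, is that all the emission current is crowded onto that tiny tip area, which is where the reliability problems come from.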
I do absolutely think this could be awesome for low-power, ultra-tiny DC-DC converters though. For instance:
http://hexus.net/tech/news/psu/64161-finsix-laptop-power-sup...
is pushing to high enough frequencies so as not to need an inductor at all. The problem with high frequencies is that efficiency usually drops, so there's a tradeoff. But if you can avoid the switching losses by moving to a transistor with higher operating frequencies, it might be quite good =)
Or the tiny size might work well for letting you do cool stuff like on-chip DC-DC conversion where you don't need an inductor because it's all so fast...
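To make the frequency/efficiency tradeoff concrete, the usual first-order estimate of hard-switching loss is P_sw ≈ ½·V·I·(t_rise + t_fall)·f, which grows linearly with switching frequency (illustrative numbers, not any particular product's topology):

```python
# First-order hard-switching loss estimate:
# P_sw ~ 0.5 * V * I * (t_rise + t_fall) * f_switch
# Linear in frequency, which is why pushing f up usually costs efficiency
# unless the device itself switches faster. Numbers below are illustrative.
def switching_loss(v, i, t_transition, f_switch):
    return 0.5 * v * i * t_transition * f_switch   # watts

v, i = 20.0, 3.0                                   # volts, amps
slow = switching_loss(v, i, 20e-9, f_switch=100e3) # 20 ns device at 100 kHz
fast = switching_loss(v, i, 20e-9, f_switch=10e6)  # same device at 10 MHz
print(slow, fast)                                  # 0.06 W vs 6 W: 100x the loss
```

A device with 100x shorter transition times would claw that factor back, which is the appeal of faster transistors here.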
It would be nice, right? Most folks worry about getting vacuum electronics to simply work, so I couldn't dig up anything on the nonlinearities of a vacuum transistor amplifier. (For those who don't follow such things: The nonlinearities of guitar amps are intimately related to how they sound. Many musicians still use tube amps because the world has become accustomed to that sound. It is the gold standard of distorted amplifiers to some of us.)
That said, I'm not optimistic. A quick check of a Fender Twin Reverb schematic (http://support.fender.com/schematics/guitar_amplifiers/65_Tw...) shows that the final amp stage has pentodes, different from the triode that the OP's article is talking about, and those pentodes have separate heaters for the cathode. So the electrons coming off the cathode are going to be much hotter. (Another name for the monolithic vacuum devices used to be "cold cathode", because they acted like thermionic emitters but without a heater.)
There's a lot that's different. Of course, the only way to know for sure is to plug it in and crank it up to 11.
You can get correct vacuum-tube-like amplification from a computer today; it's just that tubes are still cheaper. A/D converters and first-stage linear amplifiers are only getting cheaper (even in this post-Moore's-law era), so it's only a matter of time before computers retire valves in yet another application.
EDIT: Also, tunnel devices have completely different behavior from macroscopic valves. They are very non-linear, which makes them great for digital applications but horrible as sound amplifiers.
It is absolutely not the case that tube amplifiers are still sold because they're cheaper than solid state amplifiers + DSP. Their sound is preferred, and they are much, much more expensive to produce.
The valves (which don't enjoy the economies of scale they once did) are only part of the cost. The power supply of a tube amp is usually more complex than a solid state amp's, and they generally need output transformers because of high output impedance -- transformers that are linear through the audible band are expensive to produce.
At scale you could easily build a SOTA DSP card suitable for emulating "tube sound" for less than the cost of a single channel output transformer. Line 6, among others, have built businesses based on that fact.
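For the curious, the simplest building block of that kind of DSP emulation is a soft-clipping waveshaper; tanh is the textbook example (real modelers like Line 6's use far more elaborate circuit simulations):

```python
import math

# Minimal "tube-sound" DSP building block: a soft-clipping waveshaper.
# tanh compresses peaks smoothly instead of clipping them hard, which adds
# harmonics gradually as the input level rises. Drive value is illustrative.
def soft_clip(sample, drive=4.0):
    return math.tanh(drive * sample) / math.tanh(drive)

print(soft_clip(0.1))   # small signals pass with mild compression
print(soft_clip(1.0))   # full-scale input is squashed smoothly to 1.0
```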