http://phm.cba.mit.edu/theses/03.07.vigoda.pdf
edit: p 135 is where he starts talking about implementation in silicon
I suppose they could use a "non-linearizer" to put more of the precision near 0 and 1, but that would come at the expense of precision in the middle. And the smaller the voltage swing involved, the more susceptible you are to noise from various sources.
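To make the trade-off concrete, here's a rough sketch (mine, not from the thesis) of what such a "non-linearizer" amounts to: quantizing the log-odds of a probability instead of the probability itself. Uniform steps in log-odds pack many levels near 0 and 1 but thin them out around 0.5. All the parameter choices (8 bits, log-odds clamped to [-8, 8]) are illustrative assumptions.

```python
import math

BITS = 8
LEVELS = 2 ** BITS

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

# Plain quantizer: uniform steps over [0, 1], same precision everywhere.
def quantize_uniform(p):
    return round(p * (LEVELS - 1)) / (LEVELS - 1)

# "Non-linearized" quantizer: uniform steps in log-odds over an
# (arbitrarily chosen) range [-8, 8], concentrating levels near 0 and 1.
LO, HI = -8.0, 8.0
STEP = (HI - LO) / (LEVELS - 1)

def quantize_logodds(p):
    x = min(max(logit(p), LO), HI)
    xq = LO + round((x - LO) / STEP) * STEP
    return inv_logit(xq)

for p in (0.999, 0.5):
    print(p,
          abs(p - quantize_uniform(p)),   # uniform quantization error
          abs(p - quantize_logodds(p)))   # log-odds quantization error
```

Running this shows the log-odds scheme has far smaller error at p = 0.999 but larger error at p = 0.5, which is exactly the "precision in the middle" cost mentioned above.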
The Lyric web site says that they "model relationships between probabilities natively in the device physics", whereas D. E. Shaw's Anton chip sounds like it uses traditional logic gates the same way a GPU does.
P.S. Sorry, I downvoted you by accident -- I meant to upvote you.
The fundamental problem as I see it is that any domain-specific chip will receive only a tiny fraction of the R&D, economies of scale, and amortization that a general-purpose one will, so its advantage is only temporary. As long as Moore's law is operating, this will be true.
> In practice replacing digital computers with an alternative computing paradigm is a risky proposition. Alternative computing architectures, such as parallel digital computers have not tended to be commercially viable, because Moore’s Law has consistently enabled conventional von Neumann architectures to render alternatives unnecessary. Besides Moore’s Law, digital computing also benefits from mature tools and expertise for optimizing performance at all levels of the system: process technology, fundamental circuits, layout and algorithms. Many engineers are simultaneously working to improve every aspect of digital technology, while alternative technologies like analog computing do not have the same kind of industry juggernaut pushing them forward.
While Lyric may incorporate classic gates in their design, it also sounds like the heart of their technology uses something different from classic gates.
Not saying it's a bad idea... I'm really for the idea of revisiting assumptions in computer design.
That said, I'm more excited about the use of Lyric's technology in ECC memory. I'm skimming through Vigoda's thesis, and it seems that another very interesting application ought to be making even lower-power mobile backend chips.
That's a turbo decoder rather than a generic probability calculator, but it's doing probability calculations in the analog domain.
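For anyone curious what "probability calculations" means in a decoder: at the heart of turbo/LDPC-style belief propagation are two elementary soft-gate operations, sketched below in plain Python. The function names are mine; an analog implementation computes the same arithmetic with continuous voltages or currents instead of digital multipliers.

```python
def soft_xor(p1, p2):
    """P(a XOR b = 1) given independent P(a=1)=p1, P(b=1)=p2.
    This is the parity-check-style update in a belief-propagation decoder."""
    return p1 * (1 - p2) + p2 * (1 - p1)

def soft_equal(p1, p2):
    """Normalized product: fuse two independent estimates that the same
    bit is 1 (the variable-node-style update)."""
    num = p1 * p2
    return num / (num + (1 - p1) * (1 - p2))

# Two independent 60%-confident observations of the same bit reinforce
# each other to roughly 69% confidence:
print(soft_equal(0.6, 0.6))
# A parity check over two uncertain bits pulls confidence toward 0.5:
print(soft_xor(0.9, 0.8))
```

A decoder iterates these two updates over a graph of bits and checks until the beliefs settle; the analog version lets the circuit settle instead of looping.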
This sort of thing may make sense for error correction, but I don't think people will run general probability calculations on it. Too difficult to debug :-)
Though I do wonder whether they can simulate a neuron more efficiently than digital logic can.
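For a sense of what's being simulated: a leaky integrate-and-fire neuron is just exponential decay plus thresholding, which a digital simulator must step through explicitly but an analog circuit gets "for free" from an RC pair. A minimal digital sketch (my own illustration; the leak and threshold values are arbitrary):

```python
def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron spikes.
    The membrane potential v decays by `leak` each step, accumulates
    the input current, and fires/resets on crossing `threshold`."""
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v = leak * v + current
        if v >= threshold:
            spikes.append(t)
            v = 0.0
    return spikes

# A constant sub-threshold input produces a regular spike train:
print(simulate_lif([0.3] * 20))
```

Every step here costs a multiply and an add per neuron in digital logic, while the equivalent analog dynamics cost roughly a capacitor and a resistor, which is presumably where the hoped-for efficiency would come from.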
http://www.technologyreview.com/printer_friendly_article.asp...