> They were fielded as replicated networked real-time load-sharing multi-processors in 1965! Why didn't the Apollo program draw on that experience? [...] Why do we ignore the real pioneers?

I think early computer history in general worked this way. Communication was slow before the Internet, and computer engineering largely existed as an art of building isolated one-off systems. Basic concepts were painstakingly reinvented and re-engineered again and again under different circumstances, and the lack of standard architectures made transfer of concepts difficult. In the end, many computer designs died in isolation and obscurity: their architectures left no influence, and no heritage from them can be found in later computers. The game only changed with minicomputers, which turned computers into mass-market products, and microprocessors did it to an even greater degree.
Take the SIGSALY system, built in late World War 2, which already pioneered digital-to-analog conversion, digital signal processing, digital PCM audio, speech-compression codecs, and cryptography. But I doubt later DSP engineers ever heard of the project.
Or consider Multics, a relatively late and well-known system. Today it would be called a "cloud computing" operating system: it was designed for a world where computers are public utilities providing time-sharing service to the public, much like telephone or electricity, and its operating system was engineered for strong reliability and security. But its architecture was all but forgotten (the hierarchical file system survived in Unix, but that was a minor feature compared to Multics's real achievements; and I suspect it wasn't even the first time the hierarchical file system was invented). Why didn't later computers reuse its design and concepts? For starters, Multics was designed to run on a modified GE-600-series mainframe with customized hardware. Nobody else in the world had the same machine.
That's why it's crucial to preserve any remaining historical materials. Otherwise we won't even know these systems existed.