Analog computers don't get the respect they deserve. There's one more computer worth mentioning: the Flight Control Computer (FCC), an analog computer in the Saturn V that controlled the rocket's engine gimbals. It's a two-foot cylinder weighing almost 100 pounds.
The only reason they have the same name is that they were both originally built to replace people cranking out calculations on mechanical desk calculators, people who were themselves called 'computers'.
The flight control 'computer' has more in common with an analog synthesizer module than it does with a Cray-1, the AGC, an Arduino, this laptop, or these chargers, which by comparison are almost indistinguishable from one another.
ENIAC, for example, was not a stored-program computer. Reprogramming required rewiring the machine.
On the other hand, https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.... argues that, by clever use of arithmetic calculations, the Z3 could function as a universal computer, even though, quoting its Wikipedia page, "because it lacked conditional branching, the Z3 only meets this definition by speculatively computing all possible outcomes of a calculation."
Which makes me think the old punched card mechanical tabulators could also be rigged up as a universal machine, were someone clever enough.
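To make the Z3 trick concrete, here's a minimal sketch in Python (illustrative only; the Z3 did this with a fixed sequence of arithmetic operations, not a program like this): instead of branching on a condition, you compute both outcomes and let arithmetic select which one survives.

```python
def select(cond, if_true, if_false):
    """Branch-free selection; cond must be 0 or 1.

    Both outcomes are computed unconditionally ("speculatively"),
    and arithmetic decides which one survives, so no conditional
    jump is ever needed.
    """
    return cond * if_true + (1 - cond) * if_false

def branch_free_abs(x):
    """abs(x) without an if-statement."""
    is_negative = (x < 0) * 1   # 1 if negative, else 0
    return select(is_negative, -x, x)

print(branch_free_abs(-7))  # 7
print(branch_free_abs(3))   # 3
```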
"Surprisingly Turing-Complete" or "Accidentally Turing Complete" is a thing, after all, and https://gwern.net/turing-complete includes a bunch of them.
If we could optimize a set of programs down to the FPGA bitstream or even Verilog level, that would approach the kind of programs analog computers run.
I can't say anything about Turing completeness, though. It's an inherently discrete concept, and analog computers operate in the continuous signal domain.
Turing completeness is a tar pit that makes your code hard to analyse and optimise. It's an interesting challenge to find languages that allow meaningful and useful computation that are not Turing complete. Regular expressions and SQL-style relational algebra (but not Perl-style regular expressions nor most real-world SQL dialects) are examples familiar to many programmers.
Programming languages like Agda and Idris that require that you prove that your programs terminate [0] are another interesting example, less familiar to people.
[0] It's slightly more sophisticated than this: you can also write event-loops that go on forever, but you have to prove that your program does some new IO after a finite amount of time. (Everything oversimplified here.)
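For a flavour of what the termination requirement looks like in practice, here's a tiny Lean 4 sketch (Lean is total by default, in the same spirit as Agda and Idris; the function names are made up for illustration): structural recursion is accepted as terminating, while anything the checker can't prove terminating must be marked `partial` and drops out of the total fragment.

```lean
-- Accepted as total: recursion is on a structurally smaller Nat,
-- so the termination checker can prove it halts on every input.
def sumTo : Nat → Nat
  | 0     => 0
  | n + 1 => (n + 1) + sumTo n

-- No known termination proof (this is the Collatz iteration), so it
-- has to be marked `partial` and is excluded from the total fragment.
partial def collatzSteps (n : Nat) : Nat :=
  if n ≤ 1 then 0
  else if n % 2 == 0 then 1 + collatzSteps (n / 2)
  else 1 + collatzSteps (3 * n + 1)
```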
> The Flight Control Computer (FCC) was an entirely analog signal processing device, using relays controlled by the Saturn V Switch Selector Unit to manage internal redundancy and filter bank selection. The FCC contained multiple redundant signal processing paths in a triplex configuration that could switch to a standby channel in the event of a primary channel comparison failure. The flight control computer implemented basic proportional-derivative feedback for thrust vector control during powered flight, and also contained phase plane logic for control of the S-IVB auxiliary propulsion system (APS).
> For powered flight, the FCC implemented the control law $ \beta_c = a_0 H_0(s) \theta_e + a_1 H_1(s) \dot{\theta} $ where $ a_0 $ and $ a_1 $ are the proportional and derivative gains, and $ H_0(s) $ and $ H_1(s) $ are the continuous-time transfer functions of the attitude and attitude rate channel structural bending filters, respectively. In the Saturn V configuration, the gains $ a_0 $ and $ a_1 $ were not scheduled; a discrete gain switch occurred. The Saturn V FCC also implemented an electronic thrust vector cant functionality using a ramp generator that vectored the S-IC engines outboard approximately 2 degrees beginning at 20 seconds following liftoff, in order to mitigate thrust vector misalignment sensitivity.
https://ntrs.nasa.gov/api/citations/20200002830/downloads/20...
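To get a feel for that control law, here's a rough discrete-time sketch in Python; the gains, time step, and the first-order lag standing in for the bending filters are all made-up illustrative values, not the actual FCC filter shapes or flight gains.

```python
# Sketch of the PD law  beta_c = a0*H0(s)*theta_err + a1*H1(s)*theta_rate,
# discretised with a crude first-order lag in place of the bending filters.

DT = 0.02    # control step in seconds (assumed)
A0 = 0.8     # proportional gain on attitude error (made up)
A1 = 0.4     # derivative gain on attitude rate (made up)
TAU = 0.1    # time constant standing in for the H0(s)/H1(s) filters

class FirstOrderLag:
    """Crude stand-in for a bending filter: dy/dt = (x - y) / tau."""
    def __init__(self, tau):
        self.tau = tau
        self.y = 0.0

    def step(self, x, dt=DT):
        self.y += (x - self.y) * dt / self.tau
        return self.y

h0 = FirstOrderLag(TAU)
h1 = FirstOrderLag(TAU)

def gimbal_command(theta_err, theta_rate):
    """Commanded engine deflection beta_c from attitude error and rate."""
    return A0 * h0.step(theta_err) + A1 * h1.step(theta_rate)

print(gimbal_command(2.0, 0.0))  # e.g. a 2-degree attitude error, zero rate
```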
Who said women can't do math?
https://www.smithsonianmag.com/science-nature/history-human-...
Asimov.
https://literature.stackexchange.com/questions/25852/where-d...
The software (and hardware) of the Apollo missions was very well-engineered. We all know computation became ridiculously more powerful in the meantime, but that wouldn't make it easy to do the same nowadays. More performance doesn't render the need for good engineering obsolete (even though some seem to heavily lean on that premise).
What I find more interesting is to compare how complicated the tech we don’t think about has become. It’s amazing that a cable, not a smart device or even an ’80s digital watch, but a literal cable, has as much technology packed into it as Apollo 11, and we don’t even notice.
Playing devil’s advocate for your comment, one of the (admittedly many) reasons going to the moon is harder than charging a USB device is that there are no off-the-shelf parts for space travel. If you had to build your USB charger from scratch (including defining the USB specification for the first time) each time you needed to charge your phone, I bet people would quickly talk about USB cables as a “hard problem” too.
That is the biggest takeaway we should get from articles like this. Not that Apollo 11 wasn’t a hugely impressive feat of engineering. But that there is an enormous amount of engineering in our every day lives that is mass produced and we don’t even notice.
A simple-looking object, but in reality a lot of thought went into getting it to this form.
It also goes along the lines of "Simplicity is complicated"[1].
[0] - https://www.youtube.com/watch?v=XwUkbGHFAhs
[1] - https://go.dev/talks/2015/simplicity-is-complicated.slide#1
Otherwise, I completely agree.
I think this is the whole point of articles like this. I don’t think it’s sensationalist at all to compare older tech with newer and discuss how engineers did more with less.
Sure they had them, and often still do: it's called wetware.
>so computation power is not the single deciding factor on the performance and success of such an endeavor
The endeavor to charge a phone?
If I recall correctly, this was one of the areas being explored for the Mars drone, although I'm not sure whether Mars surface radiation concerns are different from what you would face in space.
> The flight software is written in C/C++ and runs in the x86 environment. For each calculation/decision, the "flight string" compares the results from both cores. If there is an inconsistency, the string is bad and doesn't send any commands. If both cores return the same response, the string sends the command to the various microcontrollers on the rocket that control things like the engines and grid fins.
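A toy sketch of that compare-then-command pattern (hypothetical names, not SpaceX's actual code): each "flight string" only emits a command when its two cores agree, and otherwise stays silent for that cycle.

```python
def flight_string_output(core_a_result, core_b_result):
    """Emit a command only if both cores of this 'string' agree.

    On a mismatch the string says nothing (returns None) rather than
    passing a possibly-corrupted command on to the actuators.
    """
    if core_a_result == core_b_result:
        return core_a_result
    return None  # string declares itself bad for this cycle

print(flight_string_output("gimbal +1.5 deg", "gimbal +1.5 deg"))  # command sent
print(flight_string_output("gimbal +1.5 deg", "gimbal +9.5 deg"))  # None
```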
Water less so.
This is a very minor point... but three of something isn't triple redundancy: it's double redundancy. Two is single redundancy, one is no redundancy.
Unless the voting mechanism can somehow produce a correct answer from differing answers from all three implementations of the logic, I don't understand how it could be considered triply redundant. Is the voting mechanism itself functionally a fourth implementation?
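For what it's worth, a plain 2-of-3 voter doesn't reconcile three different answers; it only needs any two channels to agree, which is why it tolerates a single fault. A toy sketch of that logic:

```python
from collections import Counter

def majority_vote(a, b, c):
    """2-of-3 voter: return any value that at least two channels agree on.

    If all three disagree there is no majority; the voter can only flag
    the failure, it cannot reconstruct a 'correct' answer.
    """
    value, count = Counter([a, b, c]).most_common(1)[0]
    if count >= 2:
        return value
    raise RuntimeError("no two channels agree")

print(majority_vote(42, 42, 17))  # 42: the one faulty channel is outvoted
# majority_vote(1, 2, 3) would raise: a triple disagreement is unrecoverable
```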
The LVDC was a highly redundant, cannot-fail design. The AGC had no redundancy and was designed to recover quickly if a failure occurred.
Today it's even more the case. You have fully programmable CPUs in your keyboard, trackpad, mouse, all usb devices, etc.
Wild to think the thing that charges my devices could be programmed to put a human on the moon
Plenty of engineers on the ground had no computers, and the privileged ones who did had mainframes, not personal at all.
A computer was too valuable to be employed doing anything that didn't absolutely need a computer, most useful for precision or speed of calculation.
But look what happens when you give something like a mainframe to somebody who is naturally good at aerospace when using a slide rule to begin with.
With a large enough lithium battery, a charger can easily take you part of the way there.
https://www.quora.com/Could-the-Apollo-Guidance-Computer-hav...:
“P64. At about 7,000 feet altitude (a point known as “high gate”), the computer switched automatically to P64. The computer was still doing all the flying, and steered the LM toward its landing target. However, the Commander could look at the landing site, and if he didn’t like it, could pick a different target and the computer would alter its course and steer toward that target.
At this point, they were to use one of three programs to complete the landing:
P66. This was the program that was actually used for all six lunar landings. A few hundred feet above the surface the Commander told the computer to switch to P66. This is what was commonly known as “manual mode”, although it wasn’t really. In this mode, the Commander steered the LM by telling the computer what he wanted to do, and the computer made it happen. This continued through landing.
P65. Here’s the automatic mode you asked about. If the computer remained in P64 until it was about 150 feet above the surface, then the computer automatically switched to P65, which took the LM all the way to the surface under computer control. The problem is that the computer had no way to look for obstacles or tell how level its target landing site was. On every flight, the Commander wanted to choose a different spot than where the computer was taking the LM, and so the Commander switched to P66 before the computer automatically switched to P65. [Update: The code for P65 was removed from the AGC on later flights. The programmers needed memory for additional code elsewhere, and the AGC was so memory-constrained that adding code one place meant removing something else. By that point it was obvious that none of the crews was ever going to use the automatic landing mode, so P65 was removed.]
P67. This is full-on honest-to-goodness manual mode. In P66, even though the pilot is steering, the computer is still in the loop. In P67, the computer is totally disengaged. It is still providing data, such as altitude and descent rate, but has no control over the vehicle.”
I didn't know that was just for the LVDC.
> emulate this voting scheme with 3x microcontrollers with a 4th to tally votes will not make the system any more reliable
I think that's clear enough; the vote-tallier becomes a SPOF. I'm not sure how Tandem and Stratus handled discrepancies between their (twin) processors. Stratus used a pair of OTC 68K processors, which doesn't seem to mean voting; I can't see how you'd resolve a disagreement between just two voters.
I can't see how you make a voting-based "reliable" processor from OTC CPU chips; I imagine it would require each CPU to observe the outputs of the other two, and tell itself to stop voting if it loses a ballot. Which sounds to me like custom CPU hardware.
Any external hardware for comparing votes, telling a CPU to stop voting, and routing the vote-winning output, amounts to a vote-tallier, which is a SPOF. You could have three vote-talliers, checking up on one-another; but then you'd need a vote-tallier-tallier. It's turtles from then on down.
In general, having multiple CPUs voting as a way of improving reliability seems fraught, because it increases complexity, which reduces reliability.
Maybe making reliable processors amounts to just making processors that you can rely on.
Tell them both to run the calculation again, perhaps?
Apollo 11 Guidance Computer vs. USB-C Chargers - https://news.ycombinator.com/item?id=22254719 - Feb 2020 (205 comments)
> IBM estimated in 1996 that one error per month per 256 MiB of RAM was expected for a desktop computer.
https://web.archive.org/web/20111202020146/https://www.newsc...
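Taking that 1996 figure at face value and scaling linearly with capacity (a big assumption, since per-bit error rates have changed a lot since then), the back-of-envelope numbers look like this:

```python
# IBM's 1996 estimate: roughly 1 soft error per month per 256 MiB of DRAM.
ERRORS_PER_MONTH_PER_MIB = 1 / 256

def expected_errors_per_month(ram_gib):
    """Naive linear extrapolation of the 1996 rate to a given RAM size."""
    return ram_gib * 1024 * ERRORS_PER_MONTH_PER_MIB

for gib in (0.25, 8, 32):
    print(f"{gib:>5} GiB -> ~{expected_errors_per_month(gib):.0f} errors/month")
```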
It will. Not for too long and not very reliably, but it will.
Looking at the history of space rockets, they were definitely first created as dual-purpose machines (except maybe Vanguard, which was a totally civilian program), so their electronics were designed with possible radiation from nuclear war in mind; in space, though, they encountered natural radiation of a slightly different type (spectrum). Currently, SpaceX just uses industrial-grade computers on its rockets (not rad-hardened).
Well, look at the technical details: radiation poses two types of problems for digital electronics.
1. Random upsets (spurious switching) from high-energy charged particles. Unfortunately, only military rad-hard grade parts have integrated protection mechanisms; for civilian/industrial grades you could add shielding with an EM field and a thick layer of protective material, like lead or even uranium. When a latch-up (thyristor effect) happens, you need to power-cycle (turn it off and on and reboot), which is a source of risk for the mission, but most probably it would withstand a flight to the Moon.
2. Aging of the semiconductor structure under a constant flow of highly penetrating particles. Basically it is accelerated diffusion, which destroys semiconductor structures. But in the Earth-Moon environment this is only an issue for long-term operations (months or even years).
So, it will work.
https://www.petervis.com/Vintage%20Chips/PowerPC%20750/RAD75...
The next generation (at least according to NASA) will be RISC-V variants:
https://www.zdnet.com/article/nasa-has-chosen-these-cpus-to-...
[1] Example: https://www.militaryaerospace.com/computers/article/16726923...
[2] Example: https://xiphos.com/product-details/q8
The reason the controls of Dragon and Orion look the way they do is that they are not far off from the modern digital cockpits of jets like the F-22 and F-35, and everyone is used to graphical interfaces and touch controls.
Having non-intuitive interfaces that go against the habits astronauts, and later on civilian contractors, have built up by using such interfaces over the past two decades would be detrimental to overall mission success.
The other reason they'll opt for commodity hardware is that if we are going back to space for real now, you need to be able to build and deploy systems at an ever-increasing pace.
We have enough powerful, human-safety-rated hardware from aerospace and automotive; there is no need to dig up relics.
And lastly, you'll be hard-pressed to find people who still know how to work with such legacy hardware at scale, and unless we drastically change the curriculum of computer science degrees around the US and the world, that pool will only get smaller each year. We're far more likely to see ARM and RISC-V in space than Z80s.
Kermit and XMODEM probably aren't what you want to use; they sit at a higher level than what is normally used and would add a lot of overhead, if they even worked at all with latencies that can reach 5-10 s. Search for the keyword "CCSDS" to get hints about the data protocols used in space.
Here's Kermit in space ... coincidentally, in a 20-year-old article. Software I wrote supported diagnosing Kermit errors.
https://www.spacedaily.com/news/iss-03zq.html
I guess now I'm old.
[1] https://www.zdnet.com/article/nasa-has-chosen-these-cpus-to-... [2] https://www.nasa.gov/news-release/nasa-awards-next-generatio...
They've got their own chips and protocols going back just as far, like https://en.wikipedia.org/wiki/MIL-STD-1553
Previously, calculators were a room full of people, all of whom required food, shelter, clothing and ... oxygen.
The Apollo program consumed something like half of the United States’ entire IC fabrication capacity for a few years.
https://www.bbc.com/future/article/20230516-apollo-how-moon-...
The amazing thing is that they did manage to make it fit into 2 ft³, even though the integrated circuits it used had not yet been invented when the contract was written.
8KB of RAM! But hundreds of pounds vs 70lb for the AGC with fairly comparable capability (richer instructions/registers, lower initial clock rate).
The AGC was quite impressive in terms of perf/weight
TBH, it's kind of amazing that a custom computer from 50 years ago has the specs of a common IC/SoC today, but those specs scale with time.
Back then, consumers got nothing and governments got large computers (room-sized and up); then consumers got microcomputers (desktop-sized) and governments got larger mainframes; then consumers got PCs and governments got big-box supercomputers, ...
And now? Consumers get x86_64 servers, governments get x86_64 servers, and the only difference is how much money you have, how many servers you can buy, and how much space, energy, and cooling you need to run them.
well, "normal users" get laptops and smartphones, but geek-consumers buy servers... and yeah, I know arm is an alternative.
If course, from a certain point of view, they're many of the same people and money.
I wish more people understood this, and could better see the coming crisis.
AFAIK, this is typical of USB controller chips, which generally have about 20-30 I/O pins, but I’m sure there are outliers.
The AGC seems to have four 16-bit input registers and five 16-bit output registers[2], for a total of 144 I/O pins.
[1] https://ta.infinity-component.com/datasheet/9c-CYPD4126-40LQ...
[2] https://en.wikipedia.org/wiki/Apollo_Guidance_Computer#Other...