This book about CS principles is a great complement to that!
[1] https://www.amazon.com/How-Invent-Everything-Survival-Strand...
1. https://www.amazon.com/Code-Language-Computer-Hardware-Softw...
I am a self-taught developer and probably had 10 years' experience in web development when I first read Code. I would have these little moments of revelation where my mind would get ahead of the narrative of the text, because I was working backwards from my higher-level understanding to Petzold's lower-level descriptions. I think of this book fairly often when reading technical documentation or articles.
I recently listened to Jim Keller relate engineering and design to following recipes in cooking [1]. Most people just execute stacks of recipes in their day-to-day life; they can be very good at that, and the results of what they make can be very good. But to be an expert at cooking you need a deeper understanding of what food is and how it works (say, on a physics or thermodynamics level). I am very much a programming recipe executor, but reading Code I got to touch some elements of expertise, which was rewarding.
I heard an expression this weekend that I think is apt - a computer is to computer science as a telescope is to astronomy.
"Computer science is no more about computers than astronomy is about telescopes."
I did start getting lost around the second half of the book.
While I admire the Connecticut-Yankee optimism of the engineer, as a non-engineer I am seriously skeptical that a single engineer could know enough about the chemistry, materials science, physics, CS, etc. I can explain what a battery or a transistor is supposed to do, but I wouldn't have the foggiest idea how to actually make one. In this scenario, are we leaving the bunker to break into Bell Labs (or at least some research university library)?
[1]: https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect
Another book I particularly like in the same style is Feynman's (lesser-known) Lectures on Computation: https://amzn.to/2SSoJaR, where he takes you from single instructions all the way to quantum computing.
And it's tackling pretty advanced material — a bunch of category-theory stuff that I have no idea about. This is exciting!
It looks like it may be unfinished: https://reasonablypolymorphic.com/book/tying-it-all-together... closes with "Really, we're just getting started," and then (the current version of) the book ends. What a cliffhanger!
It doesn't seem to yet cover circuitry; the hardware it discusses seems to be a two-tape Turing machine, much like BF. The author seems to have been simulating the machine by hand to generate the included execution traces.
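For anyone who hasn't run into BF (Brainfuck) before: it's essentially that machine model in miniature, with a data tape, a pointer, and eight instructions. Here's a minimal interpreter sketch of my own (an illustration, not the book's machine):

```python
def run_bf(program, input_bytes=b""):
    """Minimal Brainfuck interpreter: a data tape plus an instruction tape."""
    tape = [0] * 30000          # data tape of byte cells
    dp = 0                      # data pointer
    ip = 0                      # instruction pointer
    inp = iter(input_bytes)
    out = []
    # Precompute matching bracket positions for [ and ].
    stack, jumps = [], {}
    for i, c in enumerate(program):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while ip < len(program):
        c = program[ip]
        if c == ">":
            dp += 1
        elif c == "<":
            dp -= 1
        elif c == "+":
            tape[dp] = (tape[dp] + 1) % 256
        elif c == "-":
            tape[dp] = (tape[dp] - 1) % 256
        elif c == ".":
            out.append(tape[dp])
        elif c == ",":
            tape[dp] = next(inp, 0)
        elif c == "[" and tape[dp] == 0:
            ip = jumps[ip]   # skip loop body when current cell is zero
        elif c == "]" and tape[dp] != 0:
            ip = jumps[ip]   # jump back while current cell is nonzero
        ip += 1
    return bytes(out)

# The loop computes 8*8 = 64 in cell 1, then +1 gives 65, i.e. "A".
print(run_bf("++++++++[>++++++++<-]>+."))  # b'A'
```

Everything interesting about the machine lives in the bracket-matching step; the rest is just moving a pointer and incrementing cells, which is what makes it such a clean stand-in for a Turing machine.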
I had a hard time finding the source at first: https://github.com/isovector/reasonablypolymorphic.com/blob/... has a bunch of attribute-embedded &-escaped SVG (including XMLPIs!) that he almost certainly didn't type like that. That file is duplicated at https://github.com/isovector/reasonablypolymorphic.com/blob/... in the same format.
As it turns out, the source for that post is in https://github.com/isovector/reasonablypolymorphic.com/blob/..., with embedded Haskell to produce the SVGs. The build scripts look like they might be in https://github.com/isovector/reasonablypolymorphic.com/tree/... and https://github.com/isovector/reasonablypolymorphic.com/tree/..., but I can't tell where the code for generating SVG comes from. ("stack install", maybe? But then is it datetime, sitepipe, or strptime?) So I can't figure out how to fix the text in the SVGs so it doesn't crash into the diagram lines.
Careful about cloning the repo. It's a quarter gig!
> In the next chapter, we'll investigate how to make machines that change over time, which will be the basis for us to store information. From there it's just a short hop to actual computers!
so I think that grounding abstract computation in something that can clearly be constructed in real life is actually very much a concern of the book, even if he's not planning to cover IEEE-754 denormals, the cost of TLB flushes, or strategies for reducing the bit error rate due to metastability when crossing clock domains.
Must be the new overhyped term. "We start from first principles, just like Elon Musk".
After looking at Google trends, I might be wrong, so nevermind ;) https://trends.google.com/trends/explore?date=all&q=First%20...
When we say a model is a first-principles model, it means it is derived through fundamental equations like conservation of mass/energy, and other known relationships. This is in contrast to a data-driven model, where the underlying phenomena are not explicitly modeled -- instead the model is created by fitting to data.
Elon Musk became associated with it because he applied this form of thinking to business problems, i.e. by establishing the "fundamental equations" (as it were), questioning some basic assumptions and coming up with conclusions that are necessarily true but that no one else has arrived at.
Data-driven models (or the human equivalent: reasoning by analogy) are convenient to build and work well in the space the data has been collected in (~interpolation). However, they do not extrapolate well -- you cannot be sure they will work outside of the space of training data that the model has seen.
First-principles models (or the human equivalent: reasoning by principles) are generally more difficult to build and test (I worked on first-principles physics models for a decade -- they are a pain), but because they are built on a structure of chained principles/truths, they often extrapolate well even to areas where data has not been collected.
This is why, if you want to improve efficiency and operations in known spaces, you use data-driven models (they are fast to build and deploy, and accurately capture known behavior).
But for doing design and discovery (doing new things that have never been done before), first-principles models/thinking will carry you much farther.
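A toy sketch of this interpolation/extrapolation gap (my own example, not from the thread): take free-fall distance, where the first-principles model is d(t) = ½gt², and compare it with a data-driven model fit only to measurements from the first second.

```python
# First-principles model: derived from kinematics, d = 0.5 * g * t^2.
g = 9.81

def first_principles(t):
    return 0.5 * g * t**2

# "Training data": measurements taken only for t in [0, 1] seconds.
ts = [i / 10 for i in range(11)]
ds = [first_principles(t) for t in ts]

# Data-driven model: ordinary least-squares line d ~ a*t + b,
# fit with the closed-form slope/intercept formulas.
n = len(ts)
mean_t = sum(ts) / n
mean_d = sum(ds) / n
a = sum((t - mean_t) * (d - mean_d) for t, d in zip(ts, ds)) / \
    sum((t - mean_t) ** 2 for t in ts)
b = mean_d - a * mean_t

def data_driven(t):
    return a * t + b

# Inside the training range the fit's error is modest...
print(first_principles(0.5), data_driven(0.5))
# ...but extrapolating to t = 3 s, the physics model still holds
# while the fitted line misses badly.
print(first_principles(3.0), data_driven(3.0))
```

The fitted line is a perfectly good summary of the data it saw; it just encodes no knowledge of *why* the data looks that way, so nothing constrains it once you leave the training range. The physics model extrapolates because the chained principles it's built from keep holding.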
[1] https://science.howstuffworks.com/life/inside-the-mind/human...
According to your link, it is also used a bit in South Africa (where Elon grew up), but is less common in the US. Rather than being a new and overhyped term, perhaps it is a case of Elon using a term that is quite everyday to him, without realizing it is less familiar to the audience.
It is rarer to see it in CS, but that's more because CS dealt with very simple theories until recently than because of some fashion. As CS theories start to build up from earlier ones, the term is appearing more often.