As the AND gate four gates up the chain switches, the NOT gate four gates down the chain starts to send changing, unstable signals, which may or may not be interpreted as a 1 or a 0 by the downstream gate.
That's the reason computers have a clock, to make sure all transistors in a given stage of a CPU reach a steady state before moving on to the next instruction.
This is why it's probably a good idea to work with an HDL instead of just trying to wing it.
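A toy sketch of that unstable window (a hypothetical unit-delay simulation, not from the game or any post here): when one input path to a gate is longer than the other, the output glitches until both paths settle.

```python
# Toy discrete-time gate simulation with per-tick gate delays.
# The AND gate sees A directly but NOT(A) only after a 3-tick chain,
# so y = A AND NOT(A) briefly glitches to 1 when A rises - a signal
# a clock edge must wait out before the next stage samples it.

def simulate(a_wave, not_delay=3):
    not_out = [1] * not_delay   # NOT-chain pipeline (A starts at 0, so NOT A = 1)
    trace = []
    for a in a_wave:
        y = a & not_out[0]              # AND sees the *delayed* NOT output
        not_out = not_out[1:] + [1 - a] # shift the new NOT value into the chain
        trace.append(y)
    return trace

# A switches 0 -> 1 at t=2; in a delay-free world y would stay 0 forever,
# but it glitches high for not_delay ticks while the chain catches up.
wave = [0, 0, 1, 1, 1, 1, 1, 1]
print(simulate(wave))   # [0, 0, 1, 1, 1, 0, 0, 0]
```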
Another game, Turing Complete ( https://store.steampowered.com/app/1444480/Turing_Complete/ , https://news.ycombinator.com/item?id=38925307 ), lets you build a CPU from basic gates and a much larger (and customizable) instruction set. It also has the concept of gate delays, though it doesn't visually show the unstable output the way Silicon Zeroes does.
Back in my day you only got NOT (the torch) and OR gates (wires running next to each other), and everything was built out of them. Signal repeaters had delay, the gates had delay, so you naturally had to build a clock to synchronise it all.
The main benefit I enjoyed was that you could see the signals physically propagating from one end of your CPU to the other in real time (clock cycles were in the range of 1-4 seconds), flowing through instruction fetch, decode, dispatch, logic, writing back to memory, etc. Seeing signals slowly crawl from one end to the other naturally introduced you to pipelining (it even happens by accident if you increase the clock without thinking about what will actually happen: the next instruction starts decoding before the previous one is done, more parts of your CPU start lighting up at once, and oops, now you have a pipelined CPU).
Even the scales match: many learners are surprised that the actual ALU is the tiny thing in the corner that you can barely see, and that the giant stacks of rows are memory and cache. Even in Minecraft, most of your CPU is not logic :)
It also really taught me why ASICs are so much faster: you could build a tiny, compact multiplier that multiplied hundreds of times faster than your giant CPU running a multiplication algorithm.
Looks like the community I learnt from is still around, actually: https://openredstone.org/, even if all the old forum posts seem to be gone now. There were some geniuses in that place building computers with multi-tier caches, full pipelining, (albeit primitive) out-of-order execution, and SIMD-capable CPUs, all in redstone.
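For anyone curious what "running a multiplication algorithm" means here: a CPU without a multiply unit typically does shift-and-add, one bit of the multiplier per clock cycle, while a dedicated array multiplier settles combinationally in a handful of gate delays. A toy 8-bit sketch (the cycle counting is illustrative, not from any specific redstone build):

```python
# Shift-and-add multiplication, counting one "clock cycle" per bit of b.
# A hardware array multiplier produces the same product in roughly one
# cycle's worth of gate delay, which is the ASIC advantage described above.

def shift_add_multiply(a, b, bits=8):
    result, cycles = 0, 0
    for i in range(bits):
        if (b >> i) & 1:            # if this multiplier bit is set...
            result += a << i        # ...add the shifted multiplicand
        cycles += 1                 # one loop iteration = one CPU cycle
    return result & ((1 << 2 * bits) - 1), cycles

product, cycles = shift_add_multiply(13, 11)
print(product, cycles)   # 143 in 8 "cycles", vs ~1 for a dedicated multiplier
```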
Make sure to press the "Play" button at the top for the simulator to actually run.
Here I was thinking[1][2] the reason computers had clocks was merely a consequence of the synchronous architectures that characterize them.
[1] https://en.wikipedia.org/wiki/Metastability_(electronics)
[2] https://en.wikipedia.org/wiki/Quasi-delay-insensitive_circui...
You are correct that clock-free designs exist. But calling the clock a mere consequence of synchronous design seems to be a misunderstanding of why synchronous design has a clock in the first place.
[1] http://visual6502.org/JSSim/index.html
[2] https://github.com/mist64/perfect6502/blob/268d16647c6b9cb0c...
During my CE studies I got heavily hooked on FPGAs and HDLs, but the number of times I actually ran into a timing problem that only occurred on hardware, because of these delays, can be counted on one hand — and I've been playing around with this crap since about 2018. The worst one was when I got clock domain crossing wrong while trying to get DDR3 RAM working for my master's thesis. It worked flawlessly in simulation. It took me weeks to find the wrong status signal.
Yes, they can be a serious problem. Yes, you should know about them. But no, it's not necessary.
EDIT: After a bit of thinking and bouncing ideas off of ChatGPT, one approach might be to have an SR latch that gets initialized to some arbitrary valid output, which can then use that starting output to calculate future states. From there, you could build up more complicated elements.
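That seeded-latch idea can be sketched as a toy Python model (assuming simultaneous gate updates each step — my framing, not a definitive circuit simulation): cross-coupled NOR gates whose feedback loop is initialized to a known-valid (Q, Q̄) pair, then iterated until it settles.

```python
# Cross-coupled NOR SR latch. (q, nq) is the seeded starting output;
# initializing it to a valid state avoids the undefined power-on case,
# and later states are computed from it by iterating the feedback loop.

def sr_latch(s, r, q=0, nq=1, steps=4):
    for _ in range(steps):                       # iterate feedback until stable
        q, nq = int(not (r or nq)), int(not (s or q))
    return q, nq

state = (0, 1)                  # initialize to an arbitrary valid state: Q=0
state = sr_latch(1, 0, *state)  # Set
print(state)                    # (1, 0)
state = sr_latch(0, 0, *state)  # Hold
print(state)                    # (1, 0)
state = sr_latch(0, 1, *state)  # Reset
print(state)                    # (0, 1)
```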
I did this, and not only was it great fun, it taught me loads about clocks and buses and other things that were all magic before.
Sure, there's a slight power and latency advantage in avoiding the flip-flops themselves, but every tool has slack stealing, so the core advantage everyone seems to claim for clockless design (that is, the ability to have different latencies for all your different pipelines) isn't that unique anymore, and hasn't been for a long time. Is it dead? Can I stop worrying about the async demons :)
I don't know how CPUs work so I simulated one in code - https://news.ycombinator.com/item?id=19969321 - May 2019 (172 comments)
His channel has a lot of other really great stuff on general electronics as well.
https://thumbs.worthpoint.com/zoom/images2/1/0619/05/vintage...
I built a simple 8-bit computer using a Z80 chip. You can read about it a bit more here https://www.jake-reich.co.uk/zx-jakey
I just recently wrote a JavaScript emulator [1] for a simple 8-bit CPU I had built with logic gates. Both were fun projects, but one took a weekend whereas the other took about 4 months. Having skimmed through the author's code, I totally understand why it took them so much longer to get theirs working: it's implemented with gate-level simulated logic gates, which is an impressive achievement. It's so much easier to just emulate whole components like I did.
[1] https://naberhausj.com/miscellaneous/8-bit-computer/page.htm...
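For contrast, component-level emulation can look roughly like this (a hypothetical three-instruction machine, not the author's design or mine): each step manipulates whole registers at once instead of simulating the gate network inside the adder.

```python
# Component-level emulation: the ALU "add" is one Python expression,
# not a net of simulated gates. This is why it fits in a weekend.

def step(state, program):
    """Execute one instruction: ('LDI', v), ('ADD', v), or ('STA', addr)."""
    op, arg = program[state["pc"]]
    if op == "LDI":
        state["a"] = arg & 0xFF                  # load immediate into A
    elif op == "ADD":
        state["a"] = (state["a"] + arg) & 0xFF   # whole 8-bit add in one line
    elif op == "STA":
        state["mem"][arg] = state["a"]           # store A to memory
    state["pc"] += 1
    return state

state = {"a": 0, "pc": 0, "mem": [0] * 16}
program = [("LDI", 7), ("ADD", 5), ("STA", 0)]
for _ in range(len(program)):
    step(state, program)
print(state["mem"][0])   # 12
```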
What I learned from Ben Eater is that a non-cheaty solution would have been to write a simple UART serial "device" and then interact with the CPU via serial communication with a terminal.
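In an emulator, such a device might look like the sketch below — the addresses, status bits, and class are all made up for illustration (this is not Ben Eater's actual design): the CPU reads and writes two memory-mapped registers, and the emulator shuttles bytes between them and the terminal.

```python
# Hypothetical memory-mapped UART: the CPU talks to DATA/STATUS addresses
# instead of the emulator cheating by poking values into RAM directly.

UART_DATA, UART_STATUS = 0xFE, 0xFF   # made-up addresses
TX_READY, RX_FULL = 0x01, 0x02        # made-up status bits

class Uart:
    def __init__(self):
        self.rx = []   # bytes from the terminal, waiting for the CPU
        self.tx = []   # bytes the CPU has sent to the terminal

    def read(self, addr):
        if addr == UART_STATUS:
            return TX_READY | (RX_FULL if self.rx else 0)
        if addr == UART_DATA and self.rx:
            return self.rx.pop(0)
        return 0

    def write(self, addr, value):
        if addr == UART_DATA:
            self.tx.append(value & 0xFF)

uart = Uart()
uart.rx.extend(b"hi")                    # terminal typed "hi"
assert uart.read(UART_STATUS) & RX_FULL  # CPU polls status first
print(chr(uart.read(UART_DATA)))         # h
uart.write(UART_DATA, ord("!"))          # CPU transmits a byte
print(bytes(uart.tx))                    # b'!'
```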
CPU just happens to be one of them. Fascinating that such a seemingly complex marvel of innovation is just something that everyone can do.
Of course, there's lots of groundwork, and you need to know "the trick" (e.g. that you can use transistors to make computation machines)... but it's easily within your grasp.
From memory it showed instruction decode, execution, cache and memory.
Unfortunately I've never been able to find it, because all the google results are about running DOS games and/or DOSBox.
A search engine won't help you, but a local llm might.
ChatGPT came up with what I thought was a hallucination: "CPU Sim".
After some searching, I found a Facebook post[1] with a photo of it running. That looks mostly like what I recall. The screenshot is from a 1987 version.
I can also find references to a Windows version, but no copies of either. I thought there were more features to it, but perhaps I'm misremembering, or I'm also thinking of Mikrosim, which Tomte mentions in the sibling comment.
[1] https://www.facebook.com/RMUserGroup/posts/pfbid0uVrCYMsWmJa...
That one is looking more familiar the more screenshots I see of it, but I think I might be mixing memories of two programs - this, and "CPU Sim" which I mention in a reply to the sibling comment.
If you need real instructions (without an emulator like qemu doing its own translation and messing up timing), you could use a simulator like Gem5. That’s a bit more work and a lot more compute per simulated instruction.