I'm an old fart yeeted out of the workforce by long covid. My only goals at this point are enjoying the time I have left, and seeing if I can get the BitGrid model of computation adopted before I age out.
If I'm right, and BitGrid works, we could collectively save 95% of the power and silicon required to process LLMs and other flow-heavy computation, by finally getting rid of von Neumann's premature optimization of compute, which started out life by slowing down the ENIAC by 65%.