We might have had to manage with just a few MB of RAM and efficient ARM cores running at maybe 30 MHz or so. Would we still get web browsers? How about the rest of the digital transformation?
One thing I do know for sure: LLMs would have been impossible.
It’s kind of the ideal combination in some ways. It’s fast enough to competently run a nice desktop GUI, but not so fast that you can get overly fancy with it. Eventually you’d end up with OSes that look like highly refined versions of System 7.6/Mac OS 8 or Windows 2000, which sounds lovely.
Hypercard was absolutely dope as an entry-level programming environment.
If we really got stuck in the hundreds of MHz range, I guess we’d see many-core designs coming to consumers earlier. Could have been an interesting world.
Although, I think it would mostly be impossible. Or maybe we’re in that universe already. If you are getting efficiency but not speed, you can always add parallelism. One form of parallelism is pipelining. We’re at like 20 pipeline stages nowadays, right? So in the ideal case, if we weren’t able to parallelize in that dimension, we’d be at something like 6 GHz / 20 = 300 MHz. That’s pretty hand-wavy, but maybe it is a fun framing.
It would probably need a decent memory controller, since it wouldn't be able to dedicate 32 pins to a data bus; loads and stores would need to be done either 8 or 16 bits at a time, depending on how many pins you want to use for that.
What killed that balance wasn't raw speed, it was cheap RAM. Once you could throw gigabytes at a problem, the incentive to write tight code disappeared. Electron exists because memory is effectively free. An alternate timeline where CPUs got efficient but RAM stayed expensive would be fascinating — you'd probably see something like Plan 9's philosophy win out, with tiny focused processes communicating over clean interfaces instead of monolithic apps loading entire browser engines to show a chat window.
The irony is that embedded and mobile development partially lives in that world. The best iOS and Android apps feel exactly like your description — refined, responsive, deliberate. The constraint forces good design.
The backstory is that in the late 2050s, when AI has its hands in everything, humans lose trust in it. There are a few high-profile incidents, based on AI decisions, which cause public opinion to change, and an initiative is brought in to ensure important systems run hardware and software that can be trusted and human-reviewed.
A 16-bit CPU architecture, with no pipelining, speculative execution, etc., is chosen, as it's powerful enough to run such systems, but also simple enough that a human can fully understand the hardware and software.
The goal is to make a near-future space exploration MMO. My MacBook Pro can simulate 3000 CPU cores simultaneously, and I have a lot of fun ideas for it. The irony is that I'm using LLMs to build it :D
My Vic20 could do this, and a C64 easily, really it was just graphics that were wanting.
I was sending electronic messages around the world via FidoNet and PunterNet, downloaded software, was on forums, and all of that on BBSes.
When I think of the web of old, it's the actual information I love.
And a terminal connected to a bbs could be thought of as a text browser, really.
I even connected to CompuServe in the early 80s via my C64 through "Datapac", a dial gateway via telnet.
ANSI was a standard too, it could have evolved further.
Prodigy established a (limited) graphical online service in 1988.
Anyhow, the WWW was invented in 1989/1990 on a 25 MHz 68040 NeXTcube. Strictly speaking, the 68040 and NeXTcube weren't released until 1990 (and the NeXT was an expensive machine), but they were in development in 1989, so that's not a stretch. Anyhow, the WWW isn't really much more than HyperCard (1987) with networking.
Both the hardware and the Forth software.
APIs in a B2B style would likely be much more prevalent, less advertising (yay!), and less money in the internet, so more like the original internet, I guess.
GUIs like https://en.wikipedia.org/wiki/SymbOS
And https://en.wikipedia.org/wiki/Newton_OS
Show that we could have had quality desktops and mobile devices
https://www.symbos.org/shots.htm
This is what slow computers with a few hundred kB of RAM can do.
I know it’s a meme on HN to complain that modern websites are slow, but this is a perfect example of how completely distorted views of the past can get.
No, browsing the web in the early 90s was slooow. Even simple web pages took a long time to load. As you said, internet connections were very slow too. I remember visiting pages with photos that would come with a warning about the size of the page, at which point I’d get up and go get a drink or take a break while it loaded. Then scrolling pages with images would feel like the computer was working hard.
It’s silly to claim that 90s web browsers ran about as fast as they do today.
Had we stopped with 1990s tech, I don't think that things would have been fundamentally different. 1980s would have been more painful, mostly because limited memory just did not allow for particularly sophisticated graphics. So, we'd be stuck with 16-color aesthetics and you probably wouldn't be watching movies or editing photos on your computer. That would mean a blow to social media and e-commerce too.
There is certainly a level of "good enough" that's come in, but a lot of that comes not from devs but from management.
But I'll say that part of what has changed how devs program is what's fast and slow has changed from the 90s to today.
In the early 90s, you could basically load or store something into memory in 1 or 2 CPU cycles. That meant that data structures like a linked list were more ideal than data structures like an array-backed list. There was no locality impact, and adding/removing items was faster.
The difference in hard drive performance was also notable. One wasteful optimization that started in the late 90s was duplicating assets to make sure they were physically colocated with the other data being loaded. That's because the slowest memory to load from in old systems was the hard drive.
Now with SSDs, disk loading can be nearly as fast as interactions with GPU memory. Slower than main memory, but not by much. And because SSDs don't suffer as much of a penalty for random access, how you structure data on disk can be wildly different. For example, for spinning disks a B-tree structure is ideal because it reduces the number of random accesses across the disk. For an SSD, however, a hash data structure is generally better.
But also, the increase in memory has made different tradeoffs a lot more worthwhile. At one point, the best thing you could do was sort your memory in some way (perhaps a tree structure) so that searching for items was faster. That is in fact built into C++'s `map`. Now, a hash map will eat a bit more memory, but the O(1) lookup is generally much more ideal for storing lookups.
Even when we talk about the way memory allocation works, we see that different tradeoffs have been made than would be without a lot of extra memory.
State-of-the-art allocators use multiple arenas to allow multithreaded applications to allocate as fast as possible. That does mean you end up with wasted memory, but you can allocate much faster than you could in days of old. Without that extra memory headroom, you end up with slower allocation algorithms, because wasting any space would be devastating; burning the extra CPU cycles to find a location for each allocation ends up being the right tradeoff.
We had ELIZA, and that was enough for people to anthropomorphize their teletype terminals.
As much as I like my Apple Silicon Mac I could do everything I need to on 2008 hardware.
The ones that "could have happened" IMO are the transistor never being invented, or even mechanical computers becoming much more popular much earlier (there's a book about this alternate reality, The Difference Engine).
I don't think transistors being invented was that certain to happen, we could've got better vacuum tubes, or maybe something else.
People at that time were not actually sure how long the improvements would go on.
Yes, just that they would not run millions of lines of JavaScript for some social media tracking algorithm, newsletter signup, GDPR popup, newsletter popup, ad popup, etc. and you'd probably just be presented with the text only and at best a relevant static image or two. The web would be a place to get long-form information, sort of a massive e-book, not a battleground of corporations clamoring for 5 seconds of attention to make $0.05 off each of 500 million people's doom scrolling while on the toilet.
Web browsers existed back then; the web in the days of NCSA Mosaic was basically exactly the above.
Did everyone forget the era of web browsing when pages were filled with distracting animated banner ads?
The period when it was common for malicious ads to just hijack the session and take you to a different page?
The pop-up tornados where a page would spawn pop ups faster than you could close them? Pop unders getting left behind to discover when you closed your window?
Heavy flash ads causing your browser to slow to a crawl?
The modern web browsing experience without an ad blocker feels tame compared to the early days of Internet ads.
https://en.wikipedia.org/wiki/PLATO_(computer_system) is from the 1960s, so, technically, it certainly is possible. Whether it would make sense commercially to support a billion users would depend on whether we would stay stuck on prices of the eighties, too.
Also, there’s mobile usage. Would it be possible to build a mobile network with thousands of users per km² with tech from the eighties?
Edit: oh, I thought you meant if we were stuck in 6502-style stuff. With megabytes of RAM we'd be able to do a lot more. When I was studying, we ran 20 X terminals with NCSA Mosaic on a server with a few CPUs and 128 MB of RAM or so. Graphical browsing would be fine.
Only when Java and JavaScript came on the scene did things get unbearably slow. I guess in that scenario most processing would have stayed server-side.
BBSes existed at the same time and if you were into BBSes you were obsessive about it.
We'd probably get MP3 but not video to any great or compelling degree. Mostly-text web, perhaps more gopher-like. Client-side stuff would have to be very compact, I wonder if NAPLPS would've taken off.
Screen reader software would probably love that timeline.
Only thing that killed web for old computers is JAVASCRIPT.
Ironically, now I'm using an ESP32-S3, 10x more powerful, just to run IoT devices.
We would have seen far fewer desktop apps written using JavaScript frameworks.
There was the Lynx text browser, which was ported even to MS-DOS. I was using it until about 2010. It was a great browser until websites became unusable.
Maybe they could, as ASICs in some laboratories :)
I can see it now… a national lab can run ImageNet, but it takes so many nodes with unobtanium 3dfx stuff that you have to wait 24 hours for a run to be scheduled and completed.
HotWired (Wired's first online venture) sold their first banner ads in 1994.
DoubleClick was founded in 1995.
Neither was limited to 90s hardware:
Web browsers were available for machines like the Amiga, launched in 1985, and today you can find people who have made simple browsers run on 8-bit home computers like the C64.
That said, a retro laptop this thick would look really nice in stained wood.
What printer are you using?
Funnily enough, I've been musing this past month whether I'd separate work better if I had a limited Amiga A1200 for anything other than work! This would fit nicely.
Please do submit to Hackaday; I'm sure they'd salivate over this, and it's amazing when you have the creator in the comments. Even if just to explain that no, a 555 wouldn't quite achieve the same result. No, not even a 556...
The extended graphics commands seem to allow X/Y positioning with an 8-bit color.
I think the picture shows an 80x25 screen?
What gives here? Anyone know what's going on?
EDIT: I can now see that it does have bitmapped graphics. It must have a built-in serial terminal with graphics capabilities.
EDIT2: Probably using this chip: https://www.adafruit.com/product/1590
:-)
Any time I see this phrase I know these are my people.
I believe there will come a day where people who can do this will be selling these on the black market for top dollar.
It occurred to me that given the 6502's predictable clock cycle timings, it should be possible to create a realtime disassembler using e.g. an Arduino Mega 2560 plus a character LCD display attached to the 6502's address/data/etc. pins.
Of course, this would only be useful in single-stepping/very slow clock speeds. Still, I think it could be useful in learning how the 6502 works.
Is there relevant prior work? I'm struggling with my google fu.
It always mildly tickles me when retrocomputer designs use anachronistic processors way more powerful than the CPU in their design - in this case, there’s an ATmega644 as a keyboard controller (64K ROM, although only 4K RAM, up to 20 MHz) and presumably something pretty powerful in the display board.
Takes me back to a time when a laptop would encourage the cat to share a couch because of the amount of heat it emitted.
Amazingly quick as well. Pointless projects are so much better and more fun when they don't take forever!
What I would really love: modern (continuously built) devices using modern (less than 10-year-old) tech that are RYF-certified.
Please, what is your trick? Is it a variation on banked memory?
LOAD "$", 8
Not 64?
(Edit: I see part of the address space is reserved for ROM, but it still seems a bit wonky.)
Add a DIN plug and record programs in Kansas City Standard on a cassette recorder. Could be a Walkman. A floppy (full 8" type) was a luxury. Almost a megabyte! Imagine what you can do when a program is the amount of text you can fit in the VBI of a Ceefax/teletext broadcast, or is typed in by hand in hex. Kansas City Standard is 300 bits/second and the tape plays in real time, so a standard C60 is like 160 kB on both sides if you were lucky: it misread and miswrote a LOT.
I used to do tabular GOTO jump-table text adventures, and use XOR screen line drawing to do moving moiré pattern interference fringes. "Mod scene" trippy graphics!
That's a Mandelbrot in ASCII on the web page, the best I've seen. Super stuff.
People wrote tiny languages for the 6502: integer-only but with C-like syntax, or Pascal or ALGOL. People did real science in BASIC; a one-weekend course got you what you needed to do some maths for a Masters or PhD in some non-CS field.
My friends did a lot more of this than me. Because I had access to a DEC-10 and a PDP-11 at work, and later VAX/VMS and BSD Unix systems, I didn't see the point of home machines. A wave I wish I'd ridden, but not seeing the future emerge has been a constant failure of mine.