The argument that computers need to go from a useless state S0 to a useful state S1, and that this necessarily takes a long time, is just bullshit. The question is: how close can you bring S0 to S1? This is a question of optimization. What happens is that S1 is different for different computer configurations, so some processing needs to happen to get from one to the other. Then the question becomes: how much effort do software companies spend optimizing that processing? The answer is "as much as they feel that they need to."
Computer boot times have been constant over many years because performance gains enable companies to add more features. As long as the boot time stays within tolerable bounds for the typical user, there is no need to optimize further. But there are tons of other computer systems in the world (particularly embedded) that must boot instantly, and they do so.
Maybe, if you restrict yourself to software and measure time from 'hardware ready', but I doubt this is possible measured from power application. There are a lot of hardware processes that need to take place before any real code can execute, e.g. locking clocks, memory training, etc.
The computer and OS config doesn't change often.
And it doesn't seem slow to test that.
So caching at boot should work.
And caching is a low effort optimization. So it should have been implemented early.
But we don't have caching at boot, why?
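The caching idea above could be sketched in a few lines. This is a toy illustration, not how any real firmware works; the cache file path, the device description, and the `slow_init` hook are all invented for the example:

```python
import hashlib
import json
import os

CACHE_FILE = "/tmp/boot_cache.json"  # hypothetical cache location

def hardware_fingerprint(devices):
    """Hash an enumerated hardware configuration into a short digest."""
    blob = json.dumps(devices, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def boot(devices, slow_init):
    """Run slow_init only when the fingerprint differs from the cached one."""
    fp = hardware_fingerprint(devices)
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            cached = json.load(f)
        if cached["fingerprint"] == fp:
            return cached["state"]          # fast path: reuse cached init state
    state = slow_init(devices)              # slow path: full initialization
    with open(CACHE_FILE, "w") as f:
        json.dump({"fingerprint": fp, "state": state}, f)
    return state
```

The hard part, of course, is the fingerprint itself: deciding what to hash, and trusting that nothing outside it has changed.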
Remember: the computer could quite literally have been completely reconfigured out from under the BIOS. Testing that everything is the same from the computer's point of view is anything but straightforward.
Also, most recognizable boot time isn't spent waiting on BIOS and POST; it's spent waiting for the boot loader, disk encryption, or the OS to spin up all the background stuff (graphics stacks, session managers, HALs, message buses, desktop environments, network services, etc.) it needs in order to fulfill the particular purpose the system is intended for.
Computation is amazingly fast, but by no means either free or magic.
Everything takes time. However, I make no excuses for Windows: They had 3-5 second boot nailed for a while. Then, they done bloated it to the point of unrecognizability.
This feature was introduced years ago. What is stopping other OSes from doing the same?
When I got my 386DX(ha!) I thought the boot process was so much of a hassle that I seriously believed it would just be a matter of months until that problem got fixed and newer computers would be instant on again. Oh boy was I wrong.
That is exactly how I feel about many other regressions in tech!
Including the promises of the internet and online gaming, or when the industry wanted to sell 3D cards but 3D games were uglier than 2D/hand-drawn games for several years..
I have a cheap laptop (it does have an SSD) that boots from pushing power to a minimal but perfectly usable i3 window manager system in a second.
Maybe the 486 was the low end of what you could buy at the time, but not everyone bought a new computer. My parents upgraded their 386 from Windows 3.1 to 95. It was slow, but usable.
Of course it'd run with much less.
I have a revisionist view of Microsoft - I no longer believe their early products of BASIC and MS-DOS were enabling. And Windows got the world stuck in the desktop PC paradigm when mainstream computing was networked, not moving floppy disks around.
Nowadays with Ubuntu and a decent disk I don't have boot speed problems. If I want the machine rebooted then it is all up and loaded where I last left off within a minute. That includes the shutdown with web server, proxy server and networked drives to deal with. So I am asking, does everyone else have a huge boot delay? Is it really something like three minutes on Windows or Apple to shutdown and restart the machine?
Since I disabled IPv6 I don't even need to reboot. The machine used to start up funny after a suspend; it doesn't anymore, and the network actually works, so no reboots are needed.
The other performance difference is concerning adblockers. Recently I turned off the adblocker for a website with a form I needed to fill in. Suddenly the fans were on and my machine was crawling along!
Today, with free and open source software, you can have a modest machine that goes at speed; or you can have a paid-for operating system, leave the adverts on, and have an experience akin to wading through a lake of treacle. The developer tools seem fine in the open source world, so I am wondering if going with Windows really is the sane and rational choice for productivity. I can understand having a Windows machine for testing and doing Windows things, but for actual work the open source world just has so much more 'professional' productivity, boot speed being an indicator of this.
Since I only reboot for OS updates (at which point the update process also runs which is much slower), I had to test this out.
My MacBook Pro, running the latest macOS, takes about 20 seconds to shut down, and then 50 seconds to boot again, including restoring the software that was running before shutdown [browser w/ tabs, IRC client, etc.], since it does that before hiding the boot progress screen.
If you hunt around a bit for things like "atari 800 memory map" you can find diagrams showing how this all worked. In fact on some systems, you could selectively map certain things from ROM into the memory map to get more RAM if you wanted.
Today, the move to put the OS onto disk, the memory hierarchy, and virtual memory eliminate this approach. You basically have to let the virtual memory system map things from disk into some page space, and often things need to be copied into RAM, work their way up the cache hierarchy and be executed somewhere in order to set the system into an initialized state. I've honestly forgotten nearly all of the details so maybe somebody else can provide a better explanation.
So the reasons computers can't boot instantly:
1) We have a complex memory system these days that prevents mapping the OS directly into the memory address space.
2) OSes aren't in ROM anymore. They sit on disk, in a state different from the one they run in.
3) Simple engineering willpower. Nobody wants to bother to figure it out because once the system is booted it doesn't really matter (with modern sleep modes and whatnot).
I say that only partly tongue-in-cheek. The home machines in that era were right on the cusp; they were complex enough that you could do Genuinely Interesting Stuff with them, and simple enough that you could still hold the whole machine in your head. The instant feedback of REPL being the primary interface, and the almost zero-time cost of rebooting when an experiment didn't work out, made for a spectacular tool for self-learning.
The complex memory system isn't the culprit here. It's one of the easiest parts to set up.
Of course mileage on a laptop with non-quality parts will vary.
1. Hardware
Hardware is a big problem for boot times. Most hardware is poorly standardized, or not standardized at all, or doesn't even follow the standards, or tries to be backwards compatible with older hardware, or is just plain buggy. This means that the initialization code has to poll and retry and work around a whole bunch of things just in case the hardware happens to be slow in responding or gives a weird response. This is the main reason why POST is so godawful slow, and why the initial linux boot sequence takes so long. Apple hardware can boot quicker because they control what hardware is in the machine and can optimize their initialization code for it.
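The kind of defensive polling described above might look like this. A toy sketch only: the status values, timeout, and poll interval are invented for illustration:

```python
import time

def wait_for_ready(read_status, timeout=2.0, poll_interval=0.01):
    """Poll a device status register until it reports ready, or give up.

    Firmware is full of loops like this: the spec may say the device is
    ready 'within 1 ms', but slow or buggy parts force generous timeouts.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if read_status() == "READY":
            return True
        time.sleep(poll_interval)   # every retry here adds to boot time
    return False
```

Multiply a few of these timeouts across dozens of devices, hit one flaky part that needs the full window, and the seconds pile up.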
2. Software
The operating system stack is HUGE. There's a LOT of state that needs to be initialized, and most of it is not very efficient (we tend to optimize the runtime operation vs the startup operation of a software package). You absolutely could cut the software component of an OS boot sequence by an order of magnitude, but the development costs would be massive, and the gains pathetic in terms of the work-over-time the machine will do over its lifespan.
3. Protocols
A large number of the protocols we use for inter-process and inter-device communication have poorly designed latency characteristics. Either they are too chatty (requiring multiple messages back and forth to do a task), or have ill-defined timeouts (requiring clients to wait longer than they should), or ambiguous states, or some poorly built implementation has become the de facto standard. This is an area I'm personally tackling.
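To make the "too chatty" point concrete, here's a back-of-envelope model; the round-trip counts and link latency are illustrative numbers, not measurements of any real protocol:

```python
def handshake_time(round_trips, one_way_latency_ms):
    """Total wall-clock time spent on protocol round trips alone."""
    return round_trips * 2 * one_way_latency_ms

# A chatty protocol needing 8 request/response exchanges over a 5 ms link:
chatty = handshake_time(8, 5)   # 80 ms gone before any real work happens
# The same task folded into a single exchange:
lean = handshake_time(1, 5)     # 10 ms
```

The latency is pure waiting, so no amount of CPU or bandwidth improvement recovers it; only redesigning the exchange does.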
4. Formats
We use a number of formats for the wrong kinds of things. Appending to a tgz file, for example, has horrendous time implications, especially as the archive size grows.
Now what would really be interesting is if we had the same kind of knowledge about the typical causes of userspace software and distributed systems being slow to start. In my experience, for proprietary software, it's mostly because people inserted random calls to sleep()...
As for anything POSIX, it's because UNIX is shitware; it took a long time to boot decades ago and it takes a long time to boot now, because more garbage has been piled on top.
It's possible for a good operating system to boot in less than a second, however. People have simply been conditioned to accept this unacceptable state of affairs, no different than a TV that takes ten seconds to become usable for no reason other than the manufacturer was lazy.
I have a few moderate boot time improvement projects on embedded Linux under my belt. The typical low hanging fruit are hardcoded wait times in boot loader and drivers, unnecessary drivers, unnecessary features (did you know that PPP - yes, the modem thing - support takes a while to initialize?), slow to mount filesystems, bad choice of storing / compressing the kernel, and delays in the init system (run the application as soon as possible instead). More hardcore techniques are various methods to reduce dynamic linker overhead and reordering blocks on storage in the order they will be read at startup.
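A couple of those low-hanging fruit are literally one-line configuration changes. For example (option names as I recall them from U-Boot and the Linux kernel Kconfig; check them against your versions):

```
# U-Boot: don't sit at the "Hit any key to stop autoboot" prompt
CONFIG_BOOTDELAY=0

# Linux: LZ4 decompresses much faster than gzip/xz kernel images
CONFIG_KERNEL_LZ4=y
```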
I guess part of the difference is that you explicitly enumerate all hardware and drivers in a device tree on embedded rather than figuring it out on the fly like on a desktop/server machine.
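On embedded, that enumeration is spelled out ahead of time in the device tree, so the kernel binds drivers directly instead of probing. A fragment might look like this (the address, compatible string, and node names are invented for illustration):

```dts
/ {
    soc {
        uart0: serial@10000000 {
            compatible = "vendor,example-uart";
            reg = <0x10000000 0x100>;
            status = "okay";   /* driver binds directly, no bus scan */
        };
    };
};
```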
Magic trick? OpenRC + no crapware + lightweight DE
I use no preloading daemons, prelinkers, gold, or malloc hacks
There's nothing inherent to posix that stops us from booting in seconds.
That said, even a CRT had 'boot' time. The screen would need a few seconds to finally warm up and be fully on. But it was still a mind-free system: you knew that you were done with the boot process and could enjoy 90% of the functionality.
The Asus PG27UQ (regarded as the current-best consumer 27" 4K LCD) takes almost 20 seconds to start up, presumably to initialize a bunch of state for its FPGA.
Of course, part of that initialisation is self-tests, harkening back to the O.G. POST[0] days.
In the following years, I noticed significant improvements both to boot times in Windows and to their "sleep"-type features. I suspect this was motivated at least in part by competition from Ubuntu.
Ubuntu has such tiny market share that I strongly doubt this.
Why does boot time take ~10 seconds? CPUs run in GHz, meaning 10^9 operations per second. That means that booting takes around 10^10 operations. Why that order of magnitude and not any other? Obvious answer is "that's where the optimization efforts stopped", but could it be 10^9 or 10^8?
Obviously booting time isn't just (or mostly) CPU, but the question extends to other peripherals: hard drive access is in the 10^8 B/s range, RAM is 10^9 B/s, USB ~ 10^7 B/s, Wi-Fi 10^6 B/s, etc. Why does booting, with all its internal subprocesses, still take ~10^1 seconds and not any other OoM?
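The Fermi estimate above can be written out explicitly. The workload figures (how many bytes a boot actually reads, how many operations it executes) are assumptions for the sake of the arithmetic, not measurements:

```python
# Rough model: boot time is bounded below by how many bytes must move
# through the slowest link, plus the CPU work to process them.
disk_bw = 10**8        # B/s, spinning disk / cheap SSD ballpark
boot_bytes = 10**9     # assume ~1 GB of kernel, libraries, services read at boot
cpu_ops_per_s = 10**9  # ~1 GHz-equivalent of useful work
boot_ops = 10**10      # assumed total operations to initialize everything

io_seconds = boot_bytes / disk_bw        # 10 s just to read the bytes
cpu_seconds = boot_ops / cpu_ops_per_s   # 10 s of CPU work
```

On these assumptions the ~10^1-second figure falls out of the I/O term alone, which suggests the lever is reading less (or reading it sequentially), not faster CPUs.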
The real question is not whether a computer can boot in a second, it's whether it can boot in a frame.
"If I saved all my work, why do I need to properly shut down my computer? Why does it take so long to shut down?"
Eh, it takes from 10-15 seconds on my computer from pressing the power button to getting to GRUB. After that, it's only another 10 seconds or so to boot Ubuntu.
I think this can only be solved on an architectural level, by introducing "specialization" in a "general" manner. In other words, have a supervising process monitor times and adjust the system to the local hardware and software demands. For the adjustments to be good, they probably require some kind of learning mechanism, which you also don't want to have in the kernel itself. Just my two cents.
BCE captures this configured Multics hardcore memory image, and saves it to a file in the BCE disk partition. BCE then returns control to Multics hardcore, so it can continue system start-up.
Subsequent (fast) boots from BCE boot the preconfigured Multics memory image, loading it from disk into memory, and transferring into the image, just as was done when the image was first saved.
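The BCE trick scales down to a toy: compute the expensive state once, save the image, and on later "boots" restore it instead of recomputing. The file path and the stand-in init function here are invented for the sketch:

```python
import os
import pickle

IMAGE = "/tmp/boot_image.pickle"   # hypothetical saved-image location

def expensive_init():
    """Stand-in for the slow, full configuration pass."""
    return {"tables": list(range(1000)), "configured": True}

def fast_boot():
    """Load the saved image if present; otherwise build and save it."""
    if os.path.exists(IMAGE):
        with open(IMAGE, "rb") as f:
            return pickle.load(f)       # restore image, skip expensive_init
    state = expensive_init()
    with open(IMAGE, "wb") as f:
        pickle.dump(state, f)           # save image for the next boot
    return state
```

The real system's hard part, as with any snapshot scheme, is knowing when the saved image is stale and must be rebuilt.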
Receivers work similarly, generally using PLLs to downmix the signal they want. While they could begin trying to receive while waiting for the PLL to stabilize, it's usually pointless, because you'd just be getting unintelligible garbage from the noise or the wrong frequency.
But a typical server BIOS takes 3-5 minutes to do its thing. If you're lucky you might be able to disable enough stuff to shave off a minute. Insanity.
I run my most critical piece of home infrastructure (PiHole, of course) bare-metal on an ancient NUC-like thing with a $20 SSD because it boots and is responding to network requests in like 6 seconds.
I know that this was the case at some point when I checked during the Windows 8 years, I don't know if this is still true.
But I do wonder why WBI generation and automatic usage upon first boot isn't more common (or possible at all)...
How much time does it take for a circle to become round? None at all. It happened instantly.
Booting, as in detecting and initializing the hardware, including the RAM, should only be needed when hardware is changed, and even then it should be possible to cache the state of any unchanged parts.