There was a single fixed location in the entire system (address 0x4 aka ExecBase), and everything an AmigaOS application required was 'bootstrapped' from that one fixed address.
All OS data structures were held together by linked lists; everything was open and could be inspected (and messed up - terrible for security, of course, but great for learning, exploring and extending).
Everything I learned about afterwards was a huge disappointment, including Mach. Particularly because AmigaOS demystified the OS: just a bunch of lists, and thanks to the OO nature, the same kinds of lists everywhere.
Here's what a node looks like: next, previous, a type, a priority, a name.
A task? A node. With a bunch of extra state.
An interrupt? A node. With a lot less extra state.
A message? A node. With an optional reply port if the message requires a reply.
Reply port? Oh, that's just a port.
A port? Yeah, a node, a pointer to a task that gets signaled and a list of messages.
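For the curious, those structures really are that small. Roughly what they look like in the exec headers (exec/nodes.h and exec/ports.h; comments mine):

    struct Node {                      /* exec/nodes.h */
        struct Node *ln_Succ;          /* next */
        struct Node *ln_Pred;          /* previous */
        UBYTE        ln_Type;          /* NT_TASK, NT_INTERRUPT, NT_MESSAGE, ... */
        BYTE         ln_Pri;           /* priority, for sorted lists */
        char        *ln_Name;
    };

    struct Message {                   /* exec/ports.h */
        struct Node     mn_Node;
        struct MsgPort *mn_ReplyPort;  /* the optional reply port */
        UWORD           mn_Length;
    };

    struct MsgPort {                   /* exec/ports.h */
        struct Node mp_Node;
        UBYTE       mp_Flags;
        UBYTE       mp_SigBit;         /* signal bit for the owning task */
        void       *mp_SigTask;        /* the task that gets signaled */
        struct List mp_MsgList;        /* the queued messages */
    };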
How do you do I/O? Send special messages to device ports.
No "write() system call", it's queues at the lowest levels and at the API layer.
* Assigns. Basically aliases for paths, but ephemeral and can be joined together. E.g. the search path for executables is the assign C:; the search path for dynamic libraries is libs:. I've added basic, superficial support for assigns to my shell. It's a hack, but being able to just randomly add mnemonics for projects is nice, and not having to put them in the filesystem as symlinks somehow also feels nicer, even if it only saves a few characters. (There's a rough C sketch of what an assign amounts to after this list.)
* Datatypes. AmigaOS apps can open modern formats even if the app hasn't been updated in 30 years, as long as they use datatypes: just drop a library and a descriptor file into the system. (Also sketched after this list.)
* Screens. I'm increasingly realising I want my wm to let apps open (and close) their own virtual desktops as a nice way of grouping windows without having to manage it manually, and I might add that - it'd be fairly easy, and on systems that don't support it the added atoms would just be a no-op. The dragging was nice to show off at the time, but less important. Ironically, given that the Amiga was one of only a few systems offering overlapping windows when it launched, screens often served as a way for apps to tile their own workspace on a separate screen/desktop. And my own wm setup increasingly feels Amiga-ish: a single desktop with floating windows and a file manager, just like the Amiga Workbench screen, plus a bunch of virtual desktops with tiling windows.
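Here's the assigns sketch promised above - roughly what the shell's Assign command boils down to, via dos.library (V36+ calls, error handling omitted):

    /* Equivalent of "Assign PROJ: Work:src/myproject" */
    BPTR lock = Lock("Work:src/myproject", SHARED_LOCK);
    if (lock)
        AssignLock("PROJ", lock);   /* PROJ: now names that directory */

    /* Assigns can be joined: "Assign C: Work:bin ADD" */
    BPTR extra = Lock("Work:bin", SHARED_LOCK);
    if (extra)
        AssignAdd("C", extra);      /* C: now also searches Work:bin */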
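And the datatypes sketch: the app names a file, not a format, and datatypes.library (V39+) dispatches to whichever datatype class recognises it. Something like:

    /* The app doesn't know or care that this is a PNG */
    Object *o = NewDTObject("RAM:photo.png",
                            DTA_GroupID, GID_PICTURE,  /* "any picture" */
                            TAG_END);
    if (o) {
        /* ... query/render via GetDTAttrs() and the DTM_* methods ... */
        DisposeDTObject(o);
    }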
In terms of the API, one of the things I loved was more something that evolved than was designed: the use of "dispatch/forwarding" libraries, like e.g. XPK, that would provide a uniform API to do something (like compression) plus an API for people to implement plugins. So much of the capability of Amiga apps is down to the culture of doing that (Datatypes was an evolution of it), and it means the capabilities of old applications keep evolving. A rough sketch of the pattern follows below.
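This isn't XPK's actual API - just the shape of the pattern: one uniform entry point in a master library, forwarding by name to whatever sub-library implements the method. In the real thing the table would be populated by scanning LIBS: for plugins; here it's a placeholder:

    #include <stddef.h>
    #include <string.h>

    struct Packer {                 /* one per sub-library/plugin */
        const char *name;           /* e.g. "NUKE", "SQSH" */
        long (*pack)(void *dst, const void *src, long len);
        long (*unpack)(void *dst, const void *src, long len);
    };

    /* Placeholder plugin table; filled in as sub-libraries are found. */
    static struct Packer packers[8];

    static struct Packer *find_packer(const char *method)
    {
        for (size_t i = 0; i < sizeof packers / sizeof *packers; i++)
            if (packers[i].name && strcmp(packers[i].name, method) == 0)
                return &packers[i];
        return NULL;
    }

    /* The one uniform call; apps gain new formats with no changes. */
    long PackBuffer(const char *method, void *dst, const void *src, long len)
    {
        struct Packer *p = find_packer(method);
        return p ? p->pack(dst, src, len) : -1;
    }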
You can still have that Amiga feeling on old PCs by using AROS: https://aros.sourceforge.io/
On the Macintosh, the whole GUI practically ran in the active app's event loop. The whole system could be held up by an app waiting for something.
Microsoft made the mistake of copying Apple when they designed MS-Windows. Even today, on the latest Windows (which has had preemptive multitasking since 1995), a slow app can still effectively hold up the user interface, leaving you nothing to do but wait for it.
When Apple wanted to add preemptive multitasking to their OS in the late '80s, they hired the guy who had written the Amiga's "exec" kernel: Carl Sassenrath.
Could you explain what you mean here? If you were to make your event loop or wndprocs hang indefinitely, it would not hang the Windows interface for the rest of the machine; it would just trigger the "Not Responding" ghost-window behavior and prompt you to kill the program. As far as I can remember it's been that way since at least Windows 2000.
AFAIK Windows is supposed to boost the CPU priority of the UI during user input, but apparently that doesn't work.
AmigaOS also boosted the CPU priority of the UI during mouse movement, except it actually worked.
PS: and instead of fixing the issue from the ground up in the OS (which admittedly is probably impossible), the VS team added a feature called 'Low Priority Builds':
[1] https://developercommunity.visualstudio.com/t/Limit-CPU-usag...
[2] https://devblogs.microsoft.com/cppblog/msbuild-low-priority-...
Technically the Amiga could display a rock-solid hires picture, but only on a special monitor that I personally never saw.
The priority on the Mac was to have a high quality monitor for black and white graphics. They put a lot of effort into drawing libraries to make the most of in-built display.
The result was that the Amiga was perfectly fine for playing games or light word processing but if you actually needed to stare at a word processor or spreadsheet for 8 hours a day you really wanted a Mac.
And the earliest ARM machines (Acorn's Archimedes line) ran rings around the Amiga because they had a custom-designed RISC CPU, so they could dispense with the custom co-processors. (They still cost a lot more than the Amiga, since they targeted the expensive education sector. Later on ARM also got used in videogame arcades via the 3DO.)
By contrast, there's a story about some Microsoft engineers taking a look at the Macintosh and asking the Apple engineers what kind of custom hardware they needed to pull off that kind of interface. The Apple guys responded, "What are you talking about? We did this all with just the CPU." Minds blown.
The designers of the Mac (and the Atari ST) deserve mad credit for achieving a lot with very little. Even though, yes, the Amiga was way cooler in the late 80s.
I know this first hand, because I got my first email address with CompuServe, running their software under emulation, while using my Amiga's dial-up modem. (I had to sneak the Mac ROM images from the computers at school...)
This was due to several factors, chief among them that the SC2000 Amiga port was made under extreme time pressure and, probably, on a very low budget. Later patches alleviated that to some degree, but patching your game in 1993? Who did that? What you got on your floppies was usually what you were stuck with, barring some extreme cases of negligence.
"no dynamic linking" (by implementing dynamic linking)
"no zombies" (as long as your programs aren't buggy)
I fail to see any meaningful distinction from what we have today. If it was more reliable, it was due to being smaller in scope and having a higher barrier to entry.
On modern Linux systems you can even do separate sets of memory permissions within a single process (and single address space), with system calls needed only at startup; see `pkeys(7)`.
https://www.man7.org/linux/man-pages/man7/pkeys.7.html
(note however that there aren't enough pkeys available to avoid the performance problem every microkernel has run into)
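A minimal sketch of what that looks like, following the pkeys(7) man page (x86-only; assumes glibc 2.27+, error handling omitted):

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int pkey = pkey_alloc(0, 0);         /* syscalls only at setup */
        char *buf = mmap(NULL, getpagesize(), PROT_READ | PROT_WRITE,
                         MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
        pkey_mprotect(buf, getpagesize(), PROT_READ | PROT_WRITE, pkey);

        buf[0] = 'x';                        /* fine */
        pkey_set(pkey, PKEY_DISABLE_ACCESS); /* no syscall: just writes PKRU */
        /* touching buf here would SIGSEGV */
        pkey_set(pkey, 0);                   /* re-enable, again no syscall */
        printf("%c\n", buf[0]);

        pkey_free(pkey);
        return 0;
    }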