I have made a few patches to the mm subsystem, some of them inspired by research for these articles.
I guess this is not the most up-to-date document?
It's also not correct: the process doesn't have all 4 GB "all to itself", because a portion of that (usually 1 or 2 GB) is mapped to the kernel.
A process does indeed have all 4 GB of VIRTUAL address space to itself, unless I'm misunderstanding you.
And regardless, I think the majority of systems running Linux today are phones, which usually have 4GB or less of RAM.
But I expect the FAQ was probably originally thinking about desktop or server systems, so, yeah, the intent there is probably out of date. Those types of systems are rarely 32-bit these days, and usually have a bit more than 4GB of RAM.
Even this is quickly becoming less true (for new phones). Even the Pinephone comes with 3 GB of RAM at a $200 price point, and that price is inflated because of the niche, low-volume nature of its production.
Samsung's "mid-range" A-series smartphones, for instance, start at 3 GB at the absolute lowest end, with most models coming with 6 GB of memory. I expect this will be even more common in a year or two.
address sizes : 36 bits physical, 48 bits virtual
address sizes : 40 bits physical, 48 bits virtual
PS: I don't understand what this means, btw.

Other than that, I also think that even when outdated, computing history is worth reading, since it gives you a natural understanding of _why_ we do what we do these days. In your day job, it also gives you a different appreciation for what people did and why they did it, and why 'this horrible code' may have made sense at the time.
Furthermore, performance engineering is fundamentally about pitting code against hardware limitations. If the hardware limitations are different, you'll get different code, but the principles remain the same.
If you're curious, write a basic emulator for older hardware (the NES is a great choice); it's both fun and eye-opening!
Edit: the NES emulator will answer 'how do you fit Super Mario Bros in 32K, and how can it run on such limited hardware?'
Sometimes, but a past description of the state of the art does not automatically become a historical tract with the passage of time. The better ones do; the others just become outdated.
(And, of course, hardware transactional memory can be used to implement 'software' transactional memory faster than in software.)
However, STM only really works well in languages that are pure by default, like Haskell (or perhaps Erlang might be close enough). In a language with pervasive mutation and side effects, it's too annoying to use. Microsoft tried to make it work for .NET for a while, and gave up.
In Linux (in sane configurations), allocations are just preorders.
EDIT: I can't reply below due to rate limiting:
I'd argue that overcommit just makes the difference between allocation and backing very stark.
Your memory IS in fact allocated in the process VMA; it's just that the anonymous pages cannot necessarily be backed.
This obviously differs in other OSes, as pointed out. It also differs if you turn overcommit off, but since so much in Linux assumes it, your system will soon break if you try.