> (as far as I'm aware) C is the only language that lets you specify a binary format and just use it.
I assume they mean:
struct foo { fields; };
struct foo *data = mmap(…);
And yes, C is one of relatively few languages that let you do this without complaint, because it’s a terrible idea. And C doesn’t even let you specify a binary format — it lets you write a struct that will correspond to a binary format in accordance with the C ABI on your particular system.

If you want to access a file containing a bunch of records using mmap, and you want a well-defined format and good performance, then use something actually intended for the purpose. Cap’n Proto and FlatBuffers are fast but often produce rather large output; protobuf and its ilk are more space efficient and very widely supported; Parquet and Feather can have excellent performance and space efficiency if you use them for their intended purposes. And everything needs to deal with the fact that, if you carelessly access mmapped data that is modified while you read it in any C-like language, you get UB.
We're so deep in this hole that people are fixing this on a CPU with silicon.
The Graviton team made a little-endian version of ARM just to let lazy code like this migrate away from Intel chips without having to rewrite struct unpacking (IBM did the same with ppc64le).
Early in my career, I spent a lot of my time byte-swapping Java bytecode into little endian to match all my bytecode interpreter enums, and completely hating how 0xCAFEBABE would literally read BE BA FE CA (jokingly referred to as "be bull shit") in (gdb) x views.
Big endian is as far as I know extinct for larger mainstream CPUs. Power still exists but is on life support. MIPS and Sparc are dead. M68k is dead.
X86 has always been LE. RISC-V is LE.
It’s not an arbitrary choice. Little endian is superior because you can cast between integer types without pointer arithmetic, and because manually implemented math ops are faster on account of working linearly through memory. It’s counterintuitive, but everything is faster and simpler.
Network data and most serialization formats are big endian by convention, a legacy from the early net growing on chips like Sparc and M68k. If it were redone now everything would be LE everywhere.
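The casting claim can be seen concretely (a small sketch assuming a little-endian host, as essentially all mainstream hardware now is; first_byte_in_memory is a made-up name):

```c
#include <stdint.h>
#include <string.h>

/* On a little-endian machine, the first byte in memory of a uint32_t is
   its least-significant byte, so a narrower view of the same address
   already *is* the truncated value: no offset arithmetic required.
   (On big endian you'd have to add sizeof(uint32_t) - 1 to the address.) */
uint8_t first_byte_in_memory(uint32_t x) {
    uint8_t b;
    memcpy(&b, &x, 1);   /* read the byte at the same address as x */
    return b;
}
```

This is also exactly why 0xCAFEBABE dumps as BE BA FE CA in a debugger on x86.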
I'm not sure how useful it is, though it was only added with GCC 6.1 about ten years ago (recent-ish in the world of arcane features like this, and only just now something you could reasonably rely on existing in all enterprise environments), so it seems some people thought it would still be useful.
The only big-endian popular arch in recent memory is PPC
And it's not a hole. We're not about to spend 100 cycles parsing a decimal string that could have been a little-endian binary number, just because you feel a dependency on a certain endianness is architecturally impure. Know what else is architecturally impure? Making binary machines handle decimal.
No? Most ARM is little endian.
> Just look at Python's pickle: it's a completely insecure serialization format. Loading a file can cause code execution even if you just wanted some numbers... but still very widely used because it fits the mix-code-and-data model of python.
Like, are they saying it's bad? Are they saying it's good? I don't even get it. While I was reading the post, I was thinking about pickle the whole time (and how terrible that idea is, too).
The myth persists that C was a gift from the gods, able to do things nothing else can.
And even in the languages that don't, it isn't as if a tiny assembly thunk is the end of the world to write, but apparently at the sight of a plain mov people run for the hills nowadays.
Use the right tool for the job. I've always felt it's often the most efficient thing to write a bit of code in assembler, if that's simpler and clearer than doing anything else.
It's hard to write obfuscated assembler because it's all sitting opened up in front of you. It's as simple as it gets and it hasn't got any secrets.
For example, to have fast load times and zero temp memory overhead I've used that for several games. Other than changing a few offsets to pointers the data is used directly. I don't have to worry about incompatibilities. Either I'm shipping for a single platform or there's a different build for each platform, including the data. There's a version in the first few bytes just so during dev we don't try to load old format files with new struct defs. But otherwise, it's great for getting fast load times.
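That version-tag scheme can be sketched roughly like so (names like blob_header, load_blob, and BLOB_VERSION are made up for illustration):

```c
#include <fcntl.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical on-disk layout: a version tag in the first bytes,
   followed by raw records that are used directly in place. */
struct blob_header {
    uint32_t version;      /* bumped whenever the struct layout changes */
    uint32_t num_records;
};

#define BLOB_VERSION 3

/* Map the file read-only and reject data written by an older build. */
const struct blob_header *load_blob(const char *path, size_t *out_len) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return NULL;
    struct stat st;
    if (fstat(fd, &st) < 0 || (size_t)st.st_size < sizeof(struct blob_header)) {
        close(fd);
        return NULL;
    }
    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                          /* the mapping outlives the fd */
    if (p == MAP_FAILED) return NULL;
    const struct blob_header *h = p;
    if (h->version != BLOB_VERSION) {   /* stale dev-build data: refuse */
        munmap(p, st.st_size);
        return NULL;
    }
    *out_len = (size_t)st.st_size;
    return h;
}
```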
Unmentioned so far is that defaults for max live memory maps are usually much higher than defaults for max open files. So, if you are careful about closing files after mapping, you can usually get more "range" before having to move from OS/distro defaults. (E.g. for `program foo*`-style work where you want to keep the foo open for some reason, like binding them to many read-only NumPy array variables.)
Flatbuffers etc. are cool, but they can be very bloaty and clunky.
These days, any C struct you built on amd64 will work identically on arm64. There really aren't any other architectures that matter.
And yes, managing concurrent access to shared resources requires care and cooperation. That has always been true, and has nothing specific to do with mmap.
The program you used to leave your comment, and the libraries it used, were loaded into memory via mmap(2) prior to execution. To use protobuf or whatever, you use mmap.
The only reason mmap isn’t more generally useful is the dearth of general-use binary on-disk formats such as ELF. We could build more memory-mapped applications if we had better library support for them. But we don’t, which I suppose was the point of TFA.
And if you load library A that references library B’s data and you change B’s data format but forget to update A, you crash horribly. Similarly, if you modify a shared library while it’s in use (your OS and/or your linker may try to avoid this), you can easily crash any process that has it mapped.
No need to add complexity and dependencies, or give up performance, by using these libraries.
The code is not portable between architectures.
You can’t actually define your data structure. You can pretend with your compiler’s version of “pack” with regrettable results.
You probably have multiple kinds of undefined behavior.
Dealing with compatibility between versions of your software is awkward at best.
You might not even get amazing performance. mmap is not a panacea. Page faults and TLB flushing are not free.
You can’t use any sort of advanced data types — you get exactly what C gives you.
Forget about enforcing any sort of invariant at the language level.
The sizes of ints, the byte order and the padding can be different for instance.
But mmap() was implemented in C because C is the natural language for exposing Unix system calls and mmap() is a syscall provided by the OS. And this is true up and down the stack. Best language for integrating with low level kernel networking (sockopts, routing, etc...)? C. Best language for async I/O primitives? C. Best language for SIMD integration? C. And it goes on and on.
Obviously you can do this stuff (including mmap()) in all sorts of runtimes. But it always appears first in C and gets ported elsewhere. Because no matter how much you think your language is better, if you have to go into the kernel to plumb out hooks for your new feature, you're going to integrate and test it using a C rig before you get the other ports.
[1] Given that the pedantry bottle was opened already, it's worth pointing out that you'd have gotten more points by noting that it appeared in 4.2BSD.
The underlying syscall doesn't use the C ABI, you need to wrap it to use it from C in the same way you need to wrap it to use it from any language, which is exactly what glibc and friends do.
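For example, on 64-bit Linux you can invoke the mmap syscall directly and see the raw error convention the libc wrapper papers over (a sketch; raw_mmap_page is a made-up helper, and 32-bit Linux uses a different mmap2 entry point):

```c
#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

/* The raw syscall returns -errno in-band as a negative value; glibc's
   wrapper is what turns that into MAP_FAILED plus a set errno. */
void *raw_mmap_page(size_t len) {
    long r = syscall(SYS_mmap, NULL, len,
                     (long)(PROT_READ | PROT_WRITE),
                     (long)(MAP_PRIVATE | MAP_ANONYMOUS),
                     -1L, 0L);
    /* kernel error returns occupy (-4095, 0) */
    if ((unsigned long)r >= (unsigned long)-4095L)
        return MAP_FAILED;
    return (void *)r;
}
```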
Moral of the story is mmap belongs to the platform, not the language.
https://github.com/AdaCore/florist/blob/master/libsrc/posix-...
No, C is the language _designed_ to write UNIX. Unix is older than C; C was designed to write it, and that's why all UNIX APIs follow C conventions. It's obvious that when you design something for a system, it will have its best APIs in the language the system is written in.
C has also multiple weird and quirky APIs that suck, especially in the ISO C libc.
Uh, no. C intrinsics are so much worse than just writing assembly that it's not even comparable.
Pure tripe. https://www.symas.com/post/are-you-sure-you-want-to-use-mmap...
My interpretation has always been that mmap should only be used for immutable, local files. You may still run into issues with those types of files, but it's very unlikely.
(You still need to be careful, of course.)
tldr; it's very different.
I would simply not mmap this.
> If you're not prepared to handle a memory access exception when you access the mapped file, don't use mmap.
fread can fail too. I don't know why you would be prepared for one and not the other.
In my experience, all these things just cause whatever process is memory mapping to freeze up horribly and make me regret ever using a network file system or external hard drive.
Most I/O calls return errors when reads or writes fail, but NFS, for example, would traditionally block on network errors by default — you probably don't want your entire lab full of diskless workstations to kernel panic every time there's a transient network glitch.
You also have the issue of multiple levels of caching and when and how to report delayed errors to programs that don't explicitly use mechanisms like fsync.
All these methods are in the standard library, i.e. they work on all platforms. The C code is specific to POSIX; Windows supports memory mapped files too but the APIs are quite different.
https://learn.microsoft.com/en-us/dotnet/standard/io/memory-...
When I was first taught C formally, they definitely walked us through all the standard FILE* manipulators and didn't mention mmap() at all. And when I first heard about mmap() I couldn't imagine personally having a reason to use it.
The difference between slurping a file into malloc'd memory and just mmap'ing it is that the latter doesn't use up anonymous memory. Under memory pressure, the mmap'd file can just be evicted and transparently reloaded later, whereas if it was copied into anonymous memory it either needs to be copied out to swap or, if there's not enough swap (e.g. if swap is disabled), the OOM killer will be invoked to shoot down some (often innocent) process.
If you need an entire file loaded into your address space, and you don't have to worry about the file being modified (e.g. have to deal with SIGBUS if the file is truncated), then mmap'ing the file is being a good citizen in terms of wisely using system resources. On a system like Linux that aggressively buffers file data, there likely won't be a performance difference if your system memory usage assumptions are correct, though you can use madvise & friends to hint to the kernel. If your assumptions are wrong, then you get graceful performance degradation (back pressure, effectively) rather than breaking things.
Are you tired of bloated software slowing your systems to a crawl because most developers and application processes think they're special snowflakes that will have a machine all to themselves? Be part of the solution, not part of the problem.
It's simple, I'll give it that.
It also often saves at least one copy operation (page cache to/from an application-level byte array), doesn't it?
I'm not sure what the author really wants to say. mmap is available in many languages (e.g. Python) on Linux (and many other *nix I suppose). C provides you with raw memory access, so using mmap is sort-of-convenient for this use case.
But if you use Python then, yes, you'll need a bytearray, because Python doesn't give you raw access to such memory - and I'm not sure you'd want to mmap a PyObject anyway?
Then, writing and reading this kind of raw memory can be kind of dangerous and non-portable - I'm not really sure that the pickle analogy even makes sense. I very much suppose (I've never tried) that if you mmap-read malicious data in C, a vulnerability would be _quite_ easy to exploit.
And if you want to go farther back, even if it wasn't called "mmap" or a specific function you had to invoke -- there were operating systems that used a "single-level store" (notably MULTICS and IBM's AS/400..err OS/400... err i5 OS... err today IBM i [seriously, IBM, pick a name and stick with it]) where the interface to disk storage on the platform is that the entire disk storage/filesystem is always mapped into the same address space as the rest of your process's memory. Memory-mapped files were basically the only interface there was, and the operating system "magically" persisted certain areas of your memory to permanent storage.
Did I claim something different? I just didn’t use that feature on other OSes.
Also, using mmap is not as simple as the article lays out. For example, what happens when another process modifies the file and your process's mapped memory now consists of parts of two different versions of the file at the same time? You also need a way to grow the mapping if you run out of room, and you want to be able to handle failures to read or write. This means you pretty much end up reimplementing fread and fwrite, going back to the approach the author didn't like: "This works, but is verbose and needlessly limited to sequential access." So it turns out "It ends up being just a nicer way to call read() and write()" is only true if you ignore the edge cases.
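Growing a mapping really does take extra machinery; on Linux (non-portably) the core of it can look like this sketch, where grow_mapping is a hypothetical helper:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Grow a live file-backed mapping: extend the file first, then ask
   mremap to enlarge the mapping, letting it move if the adjacent
   address range is already occupied. mremap is Linux-specific. */
void *grow_mapping(int fd, void *p, size_t old_len, size_t new_len) {
    if (ftruncate(fd, (off_t)new_len) < 0)
        return MAP_FAILED;
    return mremap(p, old_len, new_len, MREMAP_MAYMOVE);
}
```

Note the caller has to cope with the mapping moving, which is exactly the kind of edge case the "just mmap it" framing glosses over.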
https://docs.oracle.com/javase/8/docs/api/java/nio/MappedByt...
https://learn.microsoft.com/en-us/dotnet/api/system.io.memor...
https://learn.microsoft.com/en-us/windows/win32/memory/creat...
Memory mapping is very common.
> Look inside
> Platform APIs
Ok.
I agree platform APIs are better than most generic language APIs at least. I disagree on mmap being the "best".
If the real goal of TFA was to praise C's ability to reinterpret a chunk of memory (possibly mapped to a file) as another datatype, it would have been more effective to do so using C functions and not OS-specific system calls. For example:
FILE *f = fopen(...);
uint32_t *numbers = malloc(...);  /* buffer must be allocated before fread */
fread(numbers, ..., f);
/* access numbers[...] */
fwrite(numbers, ..., f);
fclose(f);

But I agree that it's a bizarre article, since mmap is not part of standard C and relies on platform-dependent operating system APIs.
C has those too and am glad that they do. This is what allows one to do other things while the buffer gets filled, without the need for multithreading.
Yes, easier standardized portable async interfaces would have been nice; I'm not sure how well supported they are.
The other palatable way is to register consumer coroutines on a system-provided event loop. In C one does so with macro magic, or with stack switching using a tiny bit of inline assembly.
Take a look at Simon Tatham's page on coroutines in C.
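Tatham's trick fits in a few lines (a toy version; his article covers the caveats, e.g. the static state makes the function non-reentrant, and each crReturn must sit on its own source line):

```c
/* Tatham-style stackless coroutine macros: the switch jumps back to the
   case label recorded by the previous crReturn, resuming mid-function. */
#define crBegin     static int crState = 0; switch (crState) { case 0:
#define crReturn(x) do { crState = __LINE__; return (x); \
                         case __LINE__:; } while (0)
#define crFinish    } crState = 0;

/* A generator yielding 1, 2, 3 across successive calls, then -1. */
int next_value(void) {
    crBegin;
    crReturn(1);
    crReturn(2);
    crReturn(3);
    crFinish;
    return -1;   /* exhausted; the next call starts over */
}
```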
To get really fast you may need to bypass the kernel. Or have more control on the event loop / scheduler. Database implementations would be the place to look.
I agree that the model of mmap() is amazing, though: being able to treat a file as just a range of bytes in memory, while letting the OS handle the fussy details, is incredibly useful. (It's just that the OS doesn't handle all of the fussy details, and that's a pain.)
It's unsafe though.
You also need to be careful not to have any pointers in the struct (so no slices or maps either); or if you have pointers, they must be nil. Once you start unsafe-retyping random bytes into pointers, things explode very quickly.
So maybe this article has a point.
When a developer who usually consumes the language starts critiquing the language.
I could go on as to why it's a bad signal, psychologically, but let's just say that empirically it usually doesn't come from a good place, it's more like a developer raising the stakes of their application failing and blaming the language.
Sure one out of a thousand devs might be the next Linus Torvalds and develop the next Rust. But the odds aren't great.
> This simply isn't true on memory constrained systems — and with 100 GB files — every system is memory constrained.
I suppose the author might have a point in the context of making apps that constantly need to process 100GB files? I personally never have to deal with 100GB files so I am no one to judge if the rest of the article makes sense.
So if you wanted to handle file read/write errors you would need to implement signal handlers.
https://stackoverflow.com/questions/6791415/how-do-memory-ma...
It's also interesting to me that many NoSQL systems start from the assumption that relations are too complex and that trees are preferred.
This doesn't say much about the horizontal scaling that NoSQL systems were really designed for, but most people getting on that train didn't need that kind of scale.
int len = 1000;
int file = open("numbers.void", O_RDWR | O_CREAT, 0600);
ftruncate(file, len);
void *buf = mmap(NULL, len,
    PROT_READ | PROT_WRITE, MAP_SHARED,
    file, 0);
// Treat the mmapped buffer as a heap (initialize_heap, malloc, free
// here are a hypothetical buffer-backed allocator, not the stdlib ones)
initialize_heap(buf);
// Manage dynamically-sized array on disk
int* my_array = malloc(sizeof(int) * 8, buf);
my_array = realloc(my_array, sizeof(int) * 16, buf);
free(my_array, buf);
Protobuf, Python pickle, etc. can all handle dynamic memory that gets flattened when you want to serialize.

POSIX has the best API. C has `fopen` which, while not terrible, isn't what I'd call "great".
Surely its "bring your own allocator" paradigm also allows this.
No it doesn't. If you have a file that's 2^36 bytes and your address space is only 2^32, it won't work.
On a related digression, I've seen so many cases of programs that could've handled infinitely long input in constant space instead implemented as some form of "read the whole input into memory", which unnecessarily puts a limit on the input length.
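For example, counting lines needs only a fixed-size buffer, no matter how long the input stream is (a minimal sketch):

```c
#include <stddef.h>
#include <stdio.h>

/* Count newlines using a fixed-size buffer: memory use is constant
   regardless of input length, and the input can even be a pipe. */
size_t count_lines(FILE *f) {
    char buf[4096];
    size_t n, lines = 0;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        for (size_t i = 0; i < n; i++)
            if (buf[i] == '\n')
                lines++;
    return lines;
}
```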
I’ve seen otherwise competent developers use compile-time flags to bypass mmap on 32-bit systems even though this always worked! I dealt with database engines in the 1990s that used mmap for files tens of gigabytes in size.