Source: UNIX: A History and a Memoir Paperback – October 18, 2019 by Brian W Kernighan (Author)
[1] SELinux is unmanageable; just turn it off if it gets in your way:
Right. Almost nothing does.
You see, it's https://en.m.wikipedia.org/wiki/Turtles_all_the_way_down
https://www.youtube.com/watch?v=wqI7MrtxPnk
By the way the CHM oral history video series is full of gems.
Sometimes small and simple is good.
I'll have to check because my memory is failing me atm.
Would love any resources that go into more detail, if any HN-er or the author himself knows of some!
https://www.amazon.com/Advanced-Programming-UNIX-Environment...
It is about using all Unix APIs from user space, including signals and processes.
(I am not sure what to recommend if you want to implement signals in the kernel, maybe https://pdos.csail.mit.edu/6.828/2012/xv6.html )
---
It's honestly a breath of fresh air to simply read a book that explains clearly how Unix works, with self-contained examples, and which is comprehensive and organized. (If you don't know C, that can be a barrier, but that's also a barrier reading blog posts)
I don't believe the equivalent information is anywhere on the web. (I have a lot of Unix trivia on my blog, which people still read, but it's not the same)
IMO there are some things for which it's really inefficient to use blog posts or Google or LLMs, and if you want to understand Unix signals that's probably one of them.
(This book isn't "cheap" even used, but IMO it survives with a high price precisely because the information is valuable. You get what you pay for, etc. And for a working programmer it is cheap, relatively speaking.)
If all syscalls are async (a design principle of many modern OSes) then that aspect is solved. And if there is a reliable channel-like system for IPC (also a design principle of many modern OSes) then you can implement not only signals but also more sophisticated async inter-process communication/procedure calls.
See the original comment [0] for slightly more spelled-out ideas on better designs for those three-and-a-half concepts.
You have SIGSTOP/SIGCONT/SIGKILL, which don't even really signal the process, they just do process control (suspend, resume, kill).
You have simple async messages (SIGHUP, SIGUSR1, SIGUSR2, SIGTTIN, SIGTTOU, etc) that get abused for reloading configuration/etc (with hacky workarounds like nohup for daemonization) or other stuff (gunicorn for example uses the latter two for scaling up and down dynamically). This category also includes bizarrely specific things like SIGWINCH.
You also have SIGILL, SIGSEGV, SIGFPE, etc for illegal instructions, segmentation violations, FP exceptions, etc.
And also things that might not even be good to have as async things in the first place (SIGSYS).
---
As an aside, it's not the only approach and there's definitely tradeoffs with the other approaches.
Windows has events, SEH (access violations and other exceptions), console handler routines (Ctrl+C/Ctrl+Break/shutdown, etc.), IOCPs (async I/O), callbacks, and probably some other things I'm forgetting at the moment.
Plan 9 has notes, which are strings... that lets you send arbitrary data to another process, which is neat, but using the same mechanism for process control IMO has the same drawbacks as *nix, except now they're strings instead of a single well-defined number.
If you're including all that other stuff, it's probably fair to include all of the subsequent development of notification mechanisms on the UNIX side of the fence as well; e.g., poll(2), various SVR4 IPC primitives, event ports in illumos, kqueue in FreeBSD, epoll and eventually io_uring in Linux.
It goes into the problems with Unix signals, and then explains why Linux's attempt to solve them, signalfd, doesn't work well.
How does Windows handle this? There are still signals, but I was under the impression that signals on Windows are an add-on to make the POSIX subsystem work, so maybe it isn't as broken (for example, I think it doesn't coalesce signals).
https://pubs.opengroup.org/onlinepubs/009604499/functions/bs...
It's important to remember that code in a signal handler must be reentrant. "Nonreentrant functions are generally unsafe to call from a signal handler."
As a baseline, I support developers using whatever license they would like, and targeting whatever operating systems, indeed, writing whatever code they would like in the process.
That doesn't make this specific policy a good idea. Even FSF, generally considered the most extreme (or, if you prefer, principled) exponents of the Free Software philosophy, support Windows and POSIX. They may grumble and call it Woe32, but Stallman has said some cogent things about how the fight for a world free of proprietary software is more readily advanced by making sure that Free Software projects run on proprietary systems.
They do at least license the library code under MPL, so merely using Hare doesn't lock you into a license. But I wonder about the longevity of a language where the attitude toward 95+% of the desktop is "unsupported, don't ask questions on our forums, we don't want you here".
Ironically, a Google search for "harelang repo" has as the first hit an unofficial macOS port, and the actual SourceHut repo doesn't show up in the first page of results.
Languages either snowball or fizzle out. I'm typing this on a Mac, but I could pick up a Linux machine right now if I were of a mind to. But why would I invest in learning a language which imposes a purity test on developers, when even the FSF doesn't? A great deal of open source and free software gets written on Macs, and in fact, more than you might think on Windows as well.
From where I sit, what differentiates Hare from Odin and Zig, is just this attitude of purity and exclusion. I wish you all happy hacking, of course, and success. But I'm pessimistic about the latter.
On the other hand, that is hardly the only thing from the FAQ that raises one's eyebrows:
> we have no package manager and encourage less code reuse as a shared value
> qbe generates slower code compared to LLVM, with the performance ranging from 25% to 75% the runtime performance of comparable LLVM-generated code
> Can I use multithreading in Hare? Probably not.
> So I need to implement hash tables myself? Indeed. Hash tables are a common data structure that many Hare programs will need to implement from scratch.
As it stands, this is definitely not a language designed for mass adoption. Which is fine, and at least they're upfront about it.
While I understand your concerns, I disagree with the idea of "imposition". Someone doing something for free doesn't owe anyone a particular way of doing it (as long as it's not malevolent). You're free to express your opinion, but if the developer has already established his guidelines, criticisms like this are not constructive.
Not every band has to hit the Billboard charts to be worth listening to.
Supporting an OS the devs don’t use is a big ask.
This is untrue and a naive statement. There are quite a few languages which are not popular across the board but have a very firm niche in which they thrive and fulfill critical roles.
So I get it. Especially if it is to be a more niche or pet project but then again I don't buy the ideological reason. I am a really big proponent of free software and their stance just doesn't make any sense. I agree with you here. But then again they can do whatever they want.
I believe Apple could probably get away with keeping Swift proprietary, or only supporting Apple platforms. But they don't. I have no inside-track information on why that is, but I suspect the reason is fairly simple: developers wouldn't like it.
I understand that you don't like it, but how do you come to regard a statement like this as "arbitrary?" It's exclusive, for sure. "Purity test" is one way to characterize it. But do you really think that statements like this are just the product of individual caprice? That it's not someone's attempt at a principled intervention, but just an "attitude?"
The Hares are saying they require that, which I totally understand and respect.
They just don't want to maintain Mac/Windows ports themselves. If somebody else is interested, they can maintain a port. Like that macOS one that you've already found.
This is an example of how "creating something impressive in X days" requires a lot of experience and talent built over years.
I put the English string in the catalog, updated a number of tests, ran the tests on the local system, pushed the change to the staging cluster, fixed unanticipated test failures, pushed the change to production, contacted the translators to have the string translated into a number of languages, and had the documentation updated.
But anyway, there's no "then vs now" when you are really comparing "prototype" to "deliver to users". It took Unix decades to get those strings translated.
https://www.ticalc.org/archives/files/fileinfo/463/46387.htm...
Sadly defunct. I guess the real OS was the syscalls we made along the way.
I am not a programmer today, but I can still wrap most of my head around many low level concepts. I can't, however, write anything resembling a modern web page. Nor can I understand how any larger JS application works.
https://en.wikipedia.org/wiki/Not_Another_Completely_Heurist...
GPLv3 license.
That answered my initial surprise at clicking on the ISO and getting a 60 MB download.
For comparison, Linux 0.01 was a 71k download, but contained only the kernel source.
Though I think this limitation will hamper its adoption in this multicore age:
From the FAQ https://harelang.org/documentation/faq.html
....
Can I use multithreading in Hare?
Probably not.
We prefer to encourage the use of event loops (see unix::poll or hare-ev) for multiplexing I/O operations, or multiprocessing with shared memory if you need to use CPU resources in parallel.
It is, strictly speaking, possible to create threads in a Hare program. You can link to libc and use pthreads, or you can use the clone(2) syscall directly. Operating systems implemented in Hare, such as Helios, often implement multi-threading.
However, the upstream standard library does not make reentrancy guarantees, so you are solely responsible for not shooting your foot off.
This is actually pretty powerful. I personally prefer it for most purposes, because it restricts the possibility of data races to only the shared memory regions. It's a little like an "unsafe block" of memory with respect to data races.
Well, that not only rules out multi-threading, but also usage in interrupts. Quite a limitation for a "systems programming language" methinks.
> google/syzkaller
> Fuchsia / Zircon syscalls: https://fuchsia.dev/fuchsia-src/reference/syscalls
"Memory Sealing "Mseal" System Call Merged for Linux 6.10" https://news.ycombinator.com/context?id=40474551
"I called it Linux originally as a working name. That was just because "Linus" and the X has to be there--it's UNIX, it's like, a law--and what happened was that I initially thought that I can't call it "Linux" publicly because it's just too egotistical. That was before I had a big ego."
Seems a bit like Python's philosophy of not introducing too many optimizations, to keep the runtime's complexity from spiraling out of control.
Plus it is a heavy dependency which means projects like writing a self-hosting OS in a month are much less realistic to achieve when your compiler relies on LLVM.
And not least, the code generation is pretty slow. If your language cares greatly about compile speed, which it should, this is a bummer.
So yeah, for many projects avoiding LLVM might be a good idea.
Like PC bootstrap, basic kernel action loops, process forking, yada yada