If structured data had been embraced, we would have developed appropriate tooling to interact with it in the way that we prefer.
This runs very deep in unix and a lot of people are too "brainwashed" to think of other ways. Instead they develop other exotic ways of dealing with the problem.
Oh you don't like that output? Easy! pipe that crap into sed then awk then perl then cut then wc and you're golden!
When you get to that point you have to understand that you have chosen to ignore the fact that the data you are dealing with must be represented in something much closer to a relational database than lines of ASCII text.
Logging is another area where you see the consequences of this. A log is not a line of text. Repeat after me: a log entry is not a damn line of text.
"Oh but isn't it neat you can pipe it to grep?" NO! No it's not neat, maybe it was neat 20 years ago. Today I want that damn data in a structure. Then you can still print it out in one line and pipe it to grep all you want.
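For example (a sketch, using JSON Lines as one possible structure; the field names are invented): a log entry that is a real record can still be flattened to one line per entry, and the old grep workflow keeps working on the serialized form.

```shell
# Hypothetical structured log entries, serialized one record per line.
printf '%s\n' \
  '{"ts":"2016-05-20T10:00:00Z","level":"error","msg":"disk full","dev":"sda1"}' \
  '{"ts":"2016-05-20T10:00:01Z","level":"info","msg":"rotation done"}' \
  > /tmp/structured.log

# Grep still works on the one-line-per-record serialization:
grep '"level":"error"' /tmp/structured.log
```

The difference is that a consumer that wants the structure can parse the record instead of guessing at columns.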
Another area where you see the unfortunate side effects of this philosophy is the mess of file-based software configuration.
Yes I get it, you like your SSH session and Emacs/Vim blah blah but that's short-sighted.
I want my software configuration stored in a database not in a bunch of fragile files with made up syntax that are always one typo or syntax error away from being potentially silently ignored.
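A toy illustration of the "silently ignored" failure mode (the file and key names here are invented):

```shell
# An ad-hoc key=value config with a typo in the key name:
printf 'max_conections=10\n' > /tmp/app.conf   # note: "conections"

# An app looking for the correct key simply finds nothing.
# No parser, no schema, no error -- the setting is just absent:
grep -c '^max_connections=' /tmp/app.conf || echo "setting silently absent"
```

A schema-checked store would reject the unknown key at write time instead of ignoring it at read time.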
The fetish for easily-editable ASCII files and escaping from structure is holding us back. Structured data does not automatically imply hidden and inaccessible, that's a matter of developing appropriate tooling.
All settings in a database, not text files (the registry); a command line that pipes data, not text (PowerShell); tailored UIs to change settings, not magic invocations and obscure text-file syntaxes.
I guess most developers on HN are also aware of the downsides of this philosophy. If not, try configuring IIS.
Imagine if the default wasn't bash, but something like Ruby + pipes (or some other terse language).
What is the argument for shell scripts not working on typed objects? How much time has been lost, how many bugs have been created, because every single interaction between shell scripts has to include its own parser? How many versions of "get file created timestamp from ls" do we need?
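To make the ls example concrete (GNU coreutils assumed; BSD stat spells the same query `stat -f %m`):

```shell
touch /tmp/example_file

# Fragile: scrape the human-oriented listing. Column positions and the
# date format vary with locale, ls version, and the flags passed.
ls -l /tmp/example_file | awk '{print $6, $7, $8}'

# Robust: ask for the typed fact directly, as a Unix timestamp.
stat -c %Y /tmp/example_file
```

Every script that does the first form is one more hand-rolled parser of ls output.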
Something Windows does get right is the clipboard. You place a thing on the clipboard, and when you paste, the receiving program can decide on the best representation. This is why copy-pasting images has worked so magically.
I could see an alternative system where such a mechanism exists for shell programs.
Microsoft forgot (and still forgets) the documentation for Windows: if you want a nice A-Z reference for the bootloader, or the kernel, or the shell, or IIS's configuration file, or half the command-line tools, you're usually out of luck. The official place is often an inaccessible, badly written knowledgebase article written from a task-based, usually GUI-based perspective.
It never mattered that they'd implemented a more coherent system full of better ideas, because the only way they'd tell you about it is through the GUI.
The MSDN CDs from 20 years ago were really good for a complete programmer's reference, but 1) I'm not sure how well they kept that up and 2) I could never find anything as comprehensive for sysadmins.
Every time I come across a German installation of IIS or SQL Server I cringe. Googling the right solution and then trying to figure out how they've translated each option is something I can't stand.
Maybe the downsides are because of the execution?
PowerShell's object-oriented nature is great, btw!
The amount of tooling that surrounds text is vast and has evolved over decades. You cannot replace that with a single database and call it better.
I can place the majority of my config files in git and version them. I can easily perform a full text search on any log regardless of its syntax. I can effortlessly relocate and distribute logs to different filesystems.
> I want my software configuration stored in a database not in a bunch of fragile files with made up syntax that are always one typo or syntax error away from being potentially silently ignored.
So you would like your configuration stored as a bunch of made up data structures? Databases do not make you immune from typos and syntax errors, ask anyone who has ever written a delete without a where clause.
And what happens when the giant, all-knowing database your system depends on has a bug or vulnerability? When something on my linux box breaks I can drop to a recovery mode or lower runlevel and fix it with a kernel and a shell as dependencies.
I think you would be a lot happier with a configuration management system (puppet, ansible et al) and some log infrastructure without having to completely redo the unix philosophy and the years of experience and fine-tuning that comes with it.
Or the amount of cruft.
> git
Any structured data can still be serialized and diff'd, but it isn't always the clearest. Where is the contrast here?
> made up data structures?
so standardize the non-text format
> Databases do not make you immune from typos
Depends on the constraints. There are few on text files, possibly excluding sudoers.
If you aren't sticking to good practice you can just as easily rm a text file.
> has a bug or vulnerability?
What happens when the kernel has a bug or vulnerability? There are quite a few mature db systems. Plus, all text files depend on the file system, which is why you store root on something stable like ext (still depends on hdd drivers though, unless you have some ramfs on boot).
> years of experience and fine-tuning
Can you describe specifically what that experience is, and what the "fine-tuning" is?
> The fetish for easily-editable ASCII files and escaping from structure is holding us back. Structured data does not automatically imply hidden and inaccessible, that's a matter of developing appropriate tooling.
Speaking as someone who's in the process of automating configuration management on Windows, I'll say that this is _much_ easier said than done. Imagine something like Active Directory Federation Services, which stores its configuration in a database (SQL Server) and offers a good API for altering configuration data (Microsoft.Adfs.PowerShell). Instead of using a generic configuration file template, something supported by just about every configuration management system using a wide variety of templating mechanisms, I must instead write a custom interface between the configuration management system and the AD FS configuration management API.

Contrast that with Shibboleth, which stores its configuration in a small collection of XML files (i.e., still strongly typed configuration data). These I can manage relatively easily using my configuration management system's existing file templating mechanism, no special adapter required. I can easily keep a log of all changes by storing these XML files in Git. I can put human-readable comments into them using the XML comment syntax. The same goes for configuration files that use Java properties or JSON or YAML or even ini-style syntax, not to mention all the apps whose configuration files amount to executable code loaded directly into the run-time environment (e.g., amavisd-new's config file is Perl code, SimpleSAMLphp's is PHP, portmaster's is Bourne shell, portupgrade's is Ruby, and so forth).

In short, your configuration database scheme is like an executable, whereas text config files are like source code (literally, in some cases). I'd much rather work with source code, as it remains human readable while at the same time being amenable to a variety of text manipulation tools. Databases and APIs are more difficult to work with, especially from the perspective of enterprise configuration management.
Edit: See also http://catb.org/jargon/html/S/SMOP.html.
What kind of tooling might work with ad-hoc structured data and still get all the tools talking to each other like in Unix? How would it work without having to write input parsing rules, data processing rules, and output formatting/filtering/composing rules for each tool?
I suspect that the reason it's not very popular to pass around structured data is that it's damn difficult to make various tools understand arbitrary data streams. Conversely, the power of text is that the tools understand text, only text, and do not understand the context, i.e. the user decides what the text means and how it needs to be processed. Then the tools become generic and the user can apply them in any number of contexts.
There's an awful lot of JSON that gets passed around. That seems a reasonable compromise between readable text and some sort of structure.
How so? I'm one of those people that like my SSH session, and in my case vim and "blah blah blah". I've contributed to countless open source software packages that you likely use with this method, and so have tons of other developers. Nothing is broken here, things are working great for everyone who reads the manual and follows it.
> I want my software configuration stored in a database not in a bunch of fragile files with made up syntax that are always one typo or syntax error away from being potentially silently ignored.
apache, postfix, haproxy, even vim are certainly not prone to silently ignore anything, just to name a few.
> The fetish for easily-editable ASCII files and escaping from structure is holding us back.
Holding us back from what?
I am both a developer and an administrator, and I've had all the fun with solaris/aix configurations that are often not stored in plain text that I care to have. If you also have this experience, and still feel the way you do, then I'd love to hear more. Otherwise, your rant comes off as "your way is hard, and I don't want to learn it!"
Look at all the available structures available for that plain text you speak of... XML, JSON, YAML, the list goes on. You are free to use one of those, then you have that structure you crave. There are plenty of areas that could use revolution, but UNIX-like configuration files are not one of them. There is no problem here. If you are making typos or mis-configuring your software, then you have a problem of your own creation.
As I mentioned in another comment, in my view the problem is that something like configuration is shared by both the humans and the computers. Because of this we settle on something that is not optimal for either group.
We end up with something that is hostile to both the humans and the computers, just in different ways.
In fact the argument of people like you for ASCII config files exactly demonstrates my point. You are fighting for your human-convenience against the machines.
> Holding us back from what?
By embracing and acknowledging that the humans and computers are not meant to share a language we free ourselves from this push-pull tension between human vs machine convenience.
We can develop formats and tooling that respects its human audience, that doesn't punish the human for making small superficial syntax or typo errors and so on.
And we can finally step the hell out of the way of computers and let them use what is suitable for them.
And at that point you could still have your SSH session and Vim/Emacs and blah blah blah and you could still view and interact with stuff as plaintext if you wanted to.
> apache, postfix, haproxy, even vim are certainly not prone to silently ignore anything, just to name a few.
It's not always a matter of silently ignoring something, but due to the nature of the task it is certainly very easy to shoot yourself in the foot by doing something that isn't technically an error but wasn't your intention.
For example you can silently break your cron jobs by leaving windows-newlines in them.
Perfect example of humans and computers sharing a language that is hostile to the human.
BAD BAD human! You stupid human why do you use bad Windows invisible characters? Use good linux invisible characters instead that are more tasty for your almighty lord Linux.
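The CRLF trap is easy to demonstrate (the path is invented; `cat -A` is GNU cat):

```shell
# A script saved with a Windows line ending; the \r is invisible in most editors.
printf 'echo hello\r\n' > /tmp/crlf_job.sh

# cat -A makes the stray carriage return visible as ^M before the $:
cat -A /tmp/crlf_job.sh

# Stripping the \r restores the intended behavior:
tr -d '\r' < /tmp/crlf_job.sh | sh
```

The human sees two identical files; the computer sees `hello` vs `hello<CR>`, and cron quietly does the wrong thing.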
The systemd journal works how you describe, and it is very painful to interact with. I'll take plaintext logfiles any day of the week.
It's fine if you want to interact with the log in ways that have been designed into it. But:
- it's harder to work out what you can delete to free up space in an emergency
- it's harder to get logrotate to do what you want
- it's harder to use "tac" to search through the log from the bottom up
> I want my software configuration stored in a database
So now you can't put comments in your config, you can't (as easily) deploy config with puppet, or in RPMs. You can't easily diff separate configs.
All the things that you mention can in theory be fixed over time.
The stuff I'm talking about is not for the next 6 months. It's not very meaningful to compare it against the current tools and landscape.
I can almost imagine a similar conversation in the past.
Someone saying "MAYBE ONE DAY WE CAN FLY!" and everyone's like "BUT OUR HORSES CAN ONLY JUMP 2 METERS HIGH! It would never work."
I understand your comment from a pragmatic point of view but none of those problems are big or important enough that we couldn't fix them in other ways.
Throwing away a rich structured piece of data and trading that for a dumb line of characters that needs to be re-parsed just so that it's easier to use logrotate and tac with them and so on is a losing trade.
This reminds me of one of the little niceties of Gobolinux.

You have a pristine per-version config copy sitting in the main dir, and a package-wide settings dir. It also provides a command (implemented as a shell script, as are most Gobolinux tools) that gets run upon installing a new package version.

If said command detects a difference between the existing and new config files, it gives you various options. You can have it retain all the old files, replace them with the new files, or even bring up a merged file that gives you the new lines as comments next to the old ones.
You are forgetting a crucial point: plain text is very well defined. Actually, it was already defined when the first Unix tools were being written. Using plain text means that you can use grep to search the logs of your program, even if your program was written yesterday and grep was written 40 years ago.
Structured data? In which format? Who will define the UNIQUE format to be used from now on for every tool? The same people who chose Javascript as the web programming language?
Do you realize that choosing plain text prevented any IE6-like monstrosity from happening?
Everyone can just boot into emacs :P.
The UNIX philosophy is, as written in the article, based on programs that do one thing well and work with other programs. The second part, "work with other programs", is the one that encourages (but does not require) simple, text-based I/O.
If A, B and C write programs and independently design some custom, structured, binary I/O the chance of them being compatible is nil. If they output text, the UNIX glue of pipes and text conversions makes them cooperate quickly and efficiently. Not elegant? Sure. But working well in no time.
I guess our industry is meant to run in circles, only changing the type of brackets on each loop (from parens to curly on this iteration).
It's not always the best way to approach a problem, but it's not meant to be. It's duct tape. You use it where it's good enough.
People ran away screaming from it :)
I want to be able to look at your file format using tools that haven't been specialized to the task. Is that so wrong?
Second, you are right about structured data and all. The only thing is that it's either impossible or extremely hard to achieve. Many have tried; all of them have failed. Windows now has a mix of registry, file and database configs, which is a nightmare and much worse than any unix. AIX has smitty and other config management solutions which are a bitch to work with if you want something non-trivial. Solaris is heading in this direction (actually it's heading to the grave, but that's another story) and it's also not nice. There are a lot of other OSes and programs which tried to do it and failed.
This is much like democracy: it's a terrible form of government, too bad we have nothing better. This is exactly what's up with unix configs and data formats. It is possible to make some fancy format and tools which will achieve their goal maybe 80% of the time. But they will cause a huge amount of pain in the remaining 20%, and that is where they will be ignored, and you'll end up with a mix of two worlds which is worse than one.
I remember getting fairly excited when Apple OSX first came out and quite a few of the configuration files were XML-based. Finally, a consistent format, but it wasn't pervasive enough. Even Apple couldn't see fit to break with the past.
I've even contemplated rewriting some core utils as an experiment to spit out XML (because I didn't know about typed objects at the time), but I lack the skillset.
I know we can't (and maybe we shouldn't) change Unix and its derivatives. There's too much invested in the way things work to pull the rug out. But, when a new OS comes along that wants to do something interesting, I hope the authors will take a look at designing the interface between programs rather than just spitting out or consuming whatever semi-structured ball of text that felt right at the time.
Wouldn't it be neat, for example, if instead of 'ls' spitting out lines of text (which sometimes fit the terminal window and sometimes don't, which contain a date field in the local user's locale format, in a column that depends on the switches passed to the command), you instead got structured, typed information: ISO-formatted date and time, and so on? On the presentation side, you could still make it look like any old listing from ls if you like, rather than mashing the data and presentation layers together. I'd like to imagine such a system would be more robust than one where we can never change the column the date is in for fear of breaking a billion scripts.
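Some of this is already approachable today with GNU coreutils (a sketch, not the redesign imagined above; the path is invented):

```shell
touch /tmp/listing_demo

# Ask for named, machine-friendly fields instead of scraping a listing:
stat -c '%n|%s|%Y' /tmp/listing_demo        # name|size-in-bytes|mtime-epoch

# Render the timestamp as locale-independent ISO 8601 on the presentation side:
date -u -d "@$(stat -c %Y /tmp/listing_demo)" +%FT%TZ
```

The data (epoch seconds) and the presentation (ISO 8601) stay separate, which is exactly the split the parent is asking for.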
It's the shortest path from thinking "I need to persist this crap" to getting something working. Write the bytes to a file, sprinkle some separators, read and parse it back.
This is true. But when your needs aren't that complex, basic textual output sure is nice.
> The fetish for easily-editable ASCII files and escaping from structure is holding us back. Structured data does not automatically imply hidden and inaccessible, that's a matter of developing appropriate tooling.
Good plan. I'll set up a schema by which people can exchange data, and we'll get it standardized. Given the complexity of the relationships involved, and the fact that I really don't know how my data will be used downstream of me, I'd better make it something really robust and extensible. Maybe some kind of markup language?
Then we can ensure that everyone follows the same standard. We can write a compatibility layer that wraps the existing text-only commands and transforms the data into this new extensible markup language (what to call it though? MLX?). Then anyone who has the basic text tools can download the wrappers, learn the schema, and start processing output.
Then again, I could just do that grep | cut. The only thing I have to learn is the shape of the thing I'm grepping for, and the way to use cut; the basics take a few seconds, and no additional tooling is required. Best of all, chances are high that it'll work the same way 20 years from now (though likely with expanded options not currently available).
There's a lot to be said for having simple tools that accept simple input and produce simple output.
This doesn't mean it's the only approach - databases and structured data absolutely have their places in modern CLI tooling - but that has no bearing on the value of an ASCII pipeline.
Pipes can transfer arbitrary data, so it's just the tools that you don't like, not the underlying mechanism.
I totally agree that Unix sux. We need a better philosophy. But you eventually have to come up with something that actually works in practice. I am still waiting.
I remember my excitement at the idea that things like CP/M and MSDOS running on personal computers were going to free us all from the tyranny of mainframe computers running things like Unix. We all know how that turned out. Everyone eventually just gave up and started emulating the top of a desk.
So Unix is good at messing with unstructured text? Good. Get back to me when you have something better that actually works.
Now, it's true that we should recognize where text is really inadequate, especially when indexing and searching is needed. Webpages, for example, should not be plain text.
I think the problem resides in programmers not being able to properly use and understand how a database works. Databases and their engines are black boxes, so it's normal that fewer developers want to use a DB as you suggest. Meanwhile, dictionaries and B-trees are not very sophisticated, yet I see almost no programmers using them consciously. The less a programmer knows about the tools in his hands, the less benefit he will get from them, and so he will reach for easier things.
So really my thought is that the tools are not accessible enough. The concepts of file and database are so distant that it's completely impossible to work with both, but to me it should be possible.
It may be that, given proper tooling, database-driven configuration could be visible and accessible, but the fact is, I haven't seen tooling that pulled that off.
You missed the whole point about signal vs noise.
When I'm ALWAYS presented with a blob of something to decipher, it requires a context switch.
Nothing IS something, and it's a structured something.
Well, powershell did solve that problem...
Of course some websites had to do it in an even dumber way than the law asks for. Like slashdot: http://i.imgur.com/5Fp0nmo.png
This is what greets the French every time slashdot decides to forget you agreed to let them put cookies on your computer and you need to click continue before you can get to the actual website.
The law actually made it worse for the people it's supposed to protect (those who might refuse cookies for privacy?) because those warnings will stick around like glue if the site can't give you a cookie to remember that you accepted their existence.
It would have been nice if this were permitted to be violated only by GUIs. Ask about first-time *nix experiences before the GUI-embracing era and one of the few things people noticed was the continuous text-spitting. For instance, that happened during boot and the OS loading sequence (and still happens, now just hidden by default behind splash screens): a lot of reporting about all the things that were performed successfully. It's funny, in this regard, seeing Unix's Rule of Silence respected more... outside Unix, where it's just common sense, with no need to be formulated as a rule.
I think the rule makes sense within the specific constraints *nix programs are usually expected to work in (two output channels with no structure except the one informally defined by the program and the convention that the output should be human- and machine-readable at the same time) but I don't see it as a general rule if better ways to filter the output are available.
To be fair, this has been fixed a long time ago. At least Vim (which is the Vi installed on most systems) shows the following message on startup:
~ VIM - Vi IMproved
~
~ version 7.4.1829
~ by Bram Moolenaar et al.
~ [...]
~ Vim is open source and freely distributable
~
~ Help poor children in Uganda!
~ type :help iccf<Enter> for information
~
~ type :q<Enter> to exit
~ type :help<Enter> or <F1> for on-line help
~ type :help version7<Enter> for version info
On the other hand, it doesn't show this message when you call "vi" with a filename. But at least a beginner running "vi" for the first time should be taken care of by this.

$ man foo
*scroll to the end to the EXAMPLES section*

There should be an option for that: man --take-me-to-the-examples foo

Wow, I haven't seen such a blunt and unhelpful RTFM comment for a while. This comment is inappropriate in so many ways:
1) Unix systems have an inconsistent documentation mix of man pages, info pages, "-h", "-help", "--help", HTML docs, separate manuals (e.g. the Debian Administrator's Handbook) and so on.
2) "man foo" leads to: "No manual entry for foo"
3) "man vi", as well as "man vim" both lead to a manpage that has no EXAMPLES section at all (see https://www.freebsd.org/cgi/man.cgi?query=vi, https://www.freebsd.org/cgi/man.cgi?query=vim)
4) The Vi(m) manpages explain only the command line arguments, not the editor commands. The latter are available by typing ":help" in the editor.
eg(){
MAN_KEEP_FORMATTING=1 man "$@" 2>/dev/null \
| sed --quiet --expression='/^E\(\x08.\)X\(\x08.\)\?A\(\x08.\)\?M\(\x08.\)\?P\(\x08.\)\?L\(\x08.\)\?E/{:a;p;n;/^[^ ]/q;ba}' \
| ${MANPAGER:-${PAGER:-pager -s}}
}
Usage:
$ eg tar
EXAMPLES
Create archive.tar from files foo and bar.
tar -cf archive.tar foo bar
List all files in archive.tar verbosely.
tar -tvf archive.tar
Extract all files from archive.tar.
tar -xf archive.tar
$

The interesting thing here is that, in order to make this argument, people forget that at least some new users know the "Press F1 for help" dictum, because that particular part of Common User Access was drummed into them from the start of their encounter with computers. The new users press F1 in vim (not vi) and how to exit is the second and third item on the screen.
(Press F1 in actual vi, and, in some terminals at least, it very informatively inserts the letter "P" into the document. (-:)
I am getting really comfortable with unix and I can say that the transition on a Mac is not bad! I started using it with only rudimentary knowledge of how to navigate the shell, but you really don't need it for the most part. When you start digging deeper you explore more and more of the unix backend until it's like a second face of the computer. SSHing into a server is no inconvenience anymore, etc.
I think this is the reason for developing simple, stupid GUI programs that help beginners do beginner stuff. They will (sooner or later) obtain knowledge of the terminal, but I think it's critical that the first steps are not too challenging.
Note: I am speaking about developers. I don't think non-developers need to know how to navigate the terminal. It just doesn't provide any value for them.
explains how to exit vi.
This is exactly my experience, too.
Some programs get this even worse: They spam you with lots of useless information, yet when something goes wrong, you don't get the information you need. Instead, you have to rerun it to increase the verbosity even more.
So... if I have to rerun it and have a hard time trying to reproduce the issue anyway, why did it spam me in the first place?
As an example: I love curl for piping the data to stdout per default, but I'm frequently annoyed by the progress bars I didn't ask for, especially if a script involves multiple curl commands.
FFmpeg is also quite bad here, but at least you can use -hide_banner and/or -loglevel to alias the problem away and mostly forget about it.
At its core, it is about putting humans before computers. Engelbart coined HCI as Human Computer Interface, not CHI. This philosophy steered my product designs ever since I read that as a teenager.
Granted, with too many options it could quickly get confusing (should this message go to stdout or stdinfo; is that message more informational or more debugging?), but I think that it could be managed.
Similarly, I think that Unix fell down by relying too much on unstructured text (in the sense that the structure isn't enforced, not in the sense that it's altogether absent): because of this, every single tool rolls its own format, and even very similar formats may have subtle incompatibilities.
I'd love to see a successor OS which builds on the lessons of Unix, Plan 9 and other conceptually brilliant OSes, but I fear the world will never see another successful operating system.
This is what made Unix last. Text and keyboards are the universal computing interface that has survived since the 1970s.
There's no particular reason why /etc/passwd couldn't be:
((root nil 0 0 root /root /bin/bash)
(daemon nil 1 1 daemon /usr/sbin /usr/sbin/nologin) …)
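For comparison, the colon-delimited format it would replace is already field-addressable, just untyped:

```shell
# Field-wise access to the existing colon-delimited records:
cut -d: -f1,7 /etc/passwd | head -3          # login and shell for the first three accounts

# Look up one record by key -- the poor man's SELECT:
awk -F: '$1 == "root" { print $7 }' /etc/passwd
```

The s-expression version would carry the same fields, but with the structure (and the types) made explicit instead of implied by position.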
There are any number of similar dialects which could be used, of course, but the principle is obvious.

An unfortunate turn of phrase. It's ambiguous. Do you mean last as in "endure"? Or do you mean last as in "last place"?
The worst thing about the unix philosophy, as implemented in a unix shell environment, is that programs (often) have only one interface, and it's used both for interactive use and as a programming API. This means, for example, that when we realize git version N has terrible default behavior given some arguments, we can't fix the behavior of those arguments in git version N+1 because we would break its API.
And yes of course - structured in/out, sane encoding handling etc. is just missing.
Going back to the man page point: I will grant you that not everyone checks what a program does before running it. Sadly, in those instances there's little you can do to protect them from themselves. It's similar to how you cannot protect people from blindly copying and pasting code from the internet. If someone is willing to run a command "blind" then the usefulness of the output is the least of their worries.
They could at least link to tAoUP[1].
[1] http://www.catb.org/esr/writings/taoup/html/ch01s06.html
User kagamine has read you comment.
User kagamine has processed your comment.
User kagamine came up with a barely witty response.
User kagamine has included the barely witty response for your amusement and irritation.
User kagamine hopes to not be banned for this trivial response.
Follow me on TwitFace for more like this and pictures of food.
This is the world we live in, the Rule of Silence seems like a unique and valuable asset to Unix in this day and age.
Acks are important.
It's not infuriating, it's just different.
http://www.catb.org/~esr/writings/taoup/html/ch01s06.html#id...
However the business and product developer in me wonders how I apply this to building more complex systems. Normally this involves building multiple functionalities. Does the philosophy say I shouldn't build "systems" that are complex and do multiple things? Or does it talk about how these should be implemented, as co-operating processes?
Reality: no unit exists outside the context of its ecosystem.
Ergo, the API to that unit, and the degree to which it integrates with the other pieces, is paramount.
The 'do one thing and do it well' ideal implies almost a kind of 'unit sovereignty' that in many cases does not exist.
In large, complex systems, 'units' can only be viewed as parts of a greater whole.
'What they do' is almost less important than 'how well they fit'.
http://xahlee.info/UnixResource_dir/_fastfood_dir/fastfood.h...
I hadn't heard of it until now. Thanks for the good read!
Take an example: build small components that can be reused. It's like SOA way before SOA came into use, and it makes perfect sense. Criticizing that is much more difficult, and requires more technical depth than just dismissive comments about the 'unix philosophy'.
In this case, this is the first I've heard of a 'philosophy' of silence, and it is often not golden. From a technical perspective it's important for users to get feedback, not generic unhelpful error messages or commands that silently disappear. Fortunately on Linux logging is usually quite good and most experienced users can pinpoint errors quite quickly, but options like -v, -vv, -vvv, far from helping, often increase the technical load.
Generally I think engineers need to fight the temptation to show off the importance and complexity of their software by spitting out all the unnecessary details and logs.
* good: easy to parse the result, easy to chain.
* bad: no progress report, annoying with long duration commands.
You expect it to finish at any time; its continuing is the surprise. A marker for its progress is thus, IMO, well within the range of respecting this philosophy.
I suspect the reason is that for most signals, kill can't determine whether the signal was acted upon. But for KILL and TERM it could wait a few milliseconds and then report whether the process is still alive.
Edit: those who are saying that I can write a wrapper script are missing the point. The point of computers is to be useful to their users, not to follow some philosophy people invented almost 50 years ago. If someone is bothered by kill showing messages, they can write a wrapper script (kill > /dev/null, how hard is that?), beg the developers to add a -q option to kill (like grep has), or write a new tool for sending signals.
Also, the program is called KILL, so one could be forgiven for assuming its main purpose is to KILL things...
Unix makes it trivial to write a wrapper script around `kill` that does exactly what you want -- that's the entire point.
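To make the wrapper idea concrete, here's a minimal sketch of what such a script could look like: send SIGTERM, wait a moment, then report whether the process actually went away. The name "politekill" and the exact messages are made up for illustration; `kill -0` is the standard trick for checking that a pid still exists without sending it anything.

```shell
#!/bin/sh
# politekill: hypothetical wrapper around kill(1) that reports the outcome.
politekill() {
    pid=$1
    kill -TERM "$pid" 2>/dev/null || { echo "no such process: $pid" >&2; return 1; }
    sleep 1                              # give the process a moment to clean up
    if kill -0 "$pid" 2>/dev/null; then  # -0 checks existence, sends no signal
        echo "process $pid is still running"
        return 1
    fi
    echo "process $pid terminated"
}
```

Usage would be `politekill 12345`; a nonzero exit status means the process either didn't exist or ignored the TERM.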
No, it is exactly the point.
Unix gives you Lego pieces to build things from.
A bunch of small Lego pieces you can join however you like is way more useful than a giant chunk of Lego that someone has glued together into whatever lump they happened to need that day.
When you ask why your Lego bucket didn't ship with pieces preglued into exactly the combination you want today, people absolutely will reply saying you have all the pieces you need and you can just join them yourself.
Because that is the entire point of Lego.
If that wasn't what you wanted, perhaps consider a different kind of toy?
That way, everybody wins. Both those who don't like Lego playing (me) and those who do (you).
> If that wasn't what you wanted, perhaps consider a different kind of toy?
I'd switch from GNU/Linux to something with a saner design in a heartbeat! Unfortunately for me, the only free operating system began life as a Unix clone. That's down to a historical artifact more than to any greatness of the "Unix philosophy".
Fortunately for me, there are a lot of people who have understood that thinking of an operating system as a jumbled bag of Lego bricks doesn't lead to good system design and are doing something about it (systemd and many more projects).
By the way, I haven't down-voted you, but I think I understand why others did. Before calling a pillar of a (programming) philosophy with decades of useful outcomes stupid, you should stop and ask yourself whether it's just a matter of taste on your side or, even more likely, of ignorance.
It wouldn't make sense:
1) the TERM signal can be trapped by the process, and "kill" has no right to assume that the process is supposed to quit;
2) the KILL signal is managed by the kernel and it just works (if it doesn't, then your kernel is buggy and you have more serious problems); even if "ps" shows the process as still alive after a "kill -9", you can assume it's a dead process walking.
I think this is the source of the problem. kill really is just for sending signals.
In the "Unix philosophy" there's no standard way to end processes; normally you send SIGTERM and hope that the process catches it, cleans up, and exits. If it doesn't, it could be a bug in the program, or the program may use SIGTERM with slightly different semantics (e.g. Celery waits for its children's tasks to finish, and does the "normal thing" of exiting as fast as possible when it gets two SIGTERMs); in any case I think the idea is that it's a weird, non-standard situation and the user needs to explicitly send SIGKILL (kill -9), since doing so might be dangerous.
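The "catch SIGTERM, clean up, exit" convention is easy to see in shell itself via `trap`. Here's a small sketch (the temp file is purely illustrative); it sends the signal to its own pid so the whole sequence runs in one script:

```shell
#!/bin/sh
# Sketch: cooperating with SIGTERM instead of dying mid-work.
tmpfile=$(mktemp)

cleanup() {
    echo "caught SIGTERM, cleaning up"
    rm -f "$tmpfile"     # remove our scratch state
    trap - TERM          # restore the default disposition
}
trap cleanup TERM        # install the handler

# Simulate some other process sending us SIGTERM:
kill -TERM $$
echo "still running: the trap handled the signal instead of killing us"
```

A program that doesn't install such a handler just dies with whatever state it had half-written, which is exactly why the TERM-then-KILL escalation exists.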
This is the essence of 'small, single purpose programs that work well with each other' rule.
Even if nothing else, this rule/principle/whatever is my favourite by name :)
https://en.wikipedia.org/wiki/Principle_of_least_astonishmen...
- Contrary to popular belief, Unix is user friendly. It just happens to be very selective about who it decides to make friends with.