$RANDOM yields a random integer in the range 0..32767. (This feature was already there.)
$EPOCHSECONDS yields the whole number of seconds since the epoch.
$EPOCHREALTIME yields the number of seconds since the epoch with microsecond precision.
I'm thinking of a new shell feature that would allow the user to define similar variables. For example, I have $today set to the current date in YYYY-MM-DD format, and I have to jump through some minor hoops to keep it up to date.
Does anyone else think this would be useful enough to propose as a new bash feature? Would it create any potential security holes? Should variables like $PATH be exempted?
(Of course this doesn't add any new functionality, since I could use "$(date +%F)" in place of "$today". It's just a bit of syntactic sugar.)
$ today() { date +%F; }
$ echo Today, $(today) is a great day!
Today, 2019-01-08 is a great day!

I do have a workaround for this particular case:
PROMPT_COMMAND='today=$(printf "%(%F)T\n" -1)'
but it only works in bash 4.2 and later.

I could use
PROMPT_COMMAND='today=$(date +%F)'
but I'm trying to avoid executing an external command on every prompt. (Maybe the overhead is low enough that it's not worth worrying about.)

My thoughts are (a) if user-defined special variables like this were a shell feature, I could find other uses for them and (b) it seems neater to make such a feature available to users rather than restricting it to three special-case built-in variables.
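Incidentally, the bash ≥ 4.2 workaround can skip the command substitution entirely: printf's -v flag assigns straight to a variable, so no subshell is forked at all. A minimal sketch, using the same `today` variable:

```shell
# bash >= 4.2: %(fmt)T formats a timestamp (-1 means "now"), and
# -v assigns the result to a variable without forking a subshell.
PROMPT_COMMAND='printf -v today "%(%F)T" -1'

# simulate one prompt cycle:
eval "$PROMPT_COMMAND"
echo "$today"    # e.g. 2019-01-08
```

This keeps the "no external command per prompt" property without the `$( )` fork that the original workaround still pays for.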
On the other hand, it might have been cleaner for $RANDOM, $EPOCHSECONDS, and $EPOCHREALTIME to be implemented as built-in functions rather than as special variables.
On the plus side, TIL the subshell syntax plays well with the eval/expand shortcut (Ctrl+Alt+E).
dualbus@system76-pc:~$ ksh -c 'date=; date.get() { .sh.value=$(date +%s); }; echo $date; sleep 5; echo $date'
1546926637
1546926642
See: https://docs.oracle.com/cd/E36784_01/html/E36870/ksh-1.html ("Discipline Functions")

The problem is more with a language having faddy bits and people using them when they don't make the code clearer, than with syntactic sugar.
I'm reminded of the Jargon file that says "Syntactic sugar causes cancer of the semicolon."
PROMPT_COMMAND='date=$(date +%D);time=$(date +%T)'
$ PROMPT_COMMAND='time=$(date +%T)'
$ echo $time;date +%T
12:19:07
12:20:19
Thus it will show the time after your last command returned rather than the current time.

Deploy a lambda function in multiple regions (15 regions!) with just one bash script using Apex up.
Add Route 53 on top with latency-based routing enabled and you've got latency in the tens of milliseconds from anywhere on the globe, without paying a hefty fee for that kind of latency.
bash is a programming language like any other, and you could use anything with a REPL as your shell. Python should do. In fact, I'll try it right now..
Yes, it works. Just sudo chsh -s /usr/bin/python <username> and off you go.
Once you start doing this for a bit, you'll notice that the Python REPL is an incredibly poor UI for repeated execution of subprocesses. It is very laborious: constantly wrapping your strings in quotes, calling subprocess modules, exec, painstakingly handling streams, etc.
Then you start looking for a language that has better syntax for calling external programs.. hmm...
Bash. Or zsh, or ksh, etc. These languages excel at it. But that's all they are: programming languages that happen to be super easy to use when it comes to starting external programs.
This is why it makes little sense to bind them to the OS. As far as the OS is concerned: there is no Bash. Just like there is no Python. There are just syscalls.
Python REPL, even with recent additions of TAB completion, is a poor REPL, period.
IPython, on the other hand, offers a much better programming environment than shell while still allowing easy access to most of the things you mention. Example:
In [1]: from pathlib import Path
In [2]: file = Path("~/.sbclrc").expanduser().read_text()
In [3]: !echo "$file" | awk '!/^;;/' # EDIT: to clarify, this shows passing
# Python variable to a shell pipeline.
#-quicklisp
(let ((quicklisp-init (merge-pathnames "quicklisp/setup.lisp"
(user-homedir-pathname))))
(when (probe-file quicklisp-init)
(load quicklisp-init)))
All things considered, between !, %, %% and /, IPython is a decent shell. I was using it for a few years as my main shell, actually - its Qt Console implementation specifically. I was working on Windows back then, PowerShell didn't exist yet, and `cmd.exe` was... well, exactly the same as today.

TLDR: a shell is just a REPL with a few conveniences for specific tasks. Re-implement these and suddenly any REPL becomes a workable shell.
This 2013 thread shaped a lot of the way I think about Unix: https://news.ycombinator.com/item?id=6530180
Syscalls, in general, are used in lieu of objects or other abstractions because they more accurately mirror what the underlying hardware is doing. This isn't always the case; some syscalls are maintained for POSIX compatibility and add a lot of complexity to emulate behavior that is no longer reflective of the hardware.
At the end of the day, you'll find that it's very difficult to maintain the highest levels of performance while also presenting an API that has a high level of abstraction. Things like dynamically-resizable lists, using hash tables for everything, runtime metaprogramming, and other such niceties of modern HLLs aren't free from a performance perspective.
If you really want to know more, I would suggest reading one of McKusick books on operating system design (the most recent being The Design and Implementation of the FreeBSD Operating System 2/e, but even the older ones are still largely relevant).
Maintaining this "useless POSIX compatibility trap" has a certain amount of utility; I for one like not having to re-write all of my programs every few years. I imagine others feel the same.
In closing, some projects that are pushing the boundaries of OS design which you may want to check out include:
* Redox OS (https://www.redox-os.org/) - a UNIX-like OS done from scratch in Rust
* OpenBSD (https://www.openbsd.org/) - one of the old-school Unices, written in plain old C, but with some modern security tricks up its sleeve
* Helen OS (http://www.helenos.org/) - a new microkernel OS written from scratch in C++, Helen OS is not UNIX-like
* DragonFlyBSD (https://www.dragonflybsd.org/) - a FreeBSD fork focused on file systems research
* Haiku (https://www.haiku-os.org/) - binary and source compatible with BeOS, mostly written in C++, but also has a POSIX compatibility layer
What should one be using instead in the current scenario?
Python, Rust, Golang?
Exactly. Unfortunately.
Unless we have a solution that is significantly better than POSIX/UNIX, switching to anything else incurs a significant cost that no one is willing to pay.
Short of that, we probably need a technology breakthrough that brings in a complete architectural change.
The problem is that it doesn't even suffice for the new thing to be "significantly better". Because of the huge sunk cost, the new thing needs to be able to do desirable things that a POSIX-compatible OS strictly cannot do. Otherwise, it will always be easier and faster to just glue another subsystem onto the Linux kernel and continue using that.
You've already lost my interest. Bash/Shell is incredibly powerful and doesn't need a higher-level abstraction. That is what programming languages and CLI tools are for.
As a scripting language, I loathe it and really don't understand its purpose. I always write shell scripts in POSIX shell for portability reasons. Most of the time I don't need to use any of Bash's features. In cases where advanced features are needed and portability is not a concern, there are other scripting languages much better suited for this (Python, Ruby, etc).
As an interactive shell, the only features I ever use are command history and tab completion. Bash is way too bloated for my use case (it's only a matter of time before the next Shellshock is discovered). Other lightweight shells are missing the couple of interactive features which I do use.
If anyone knows of a shell which meets my criteria of being lightweight but with command history and tab completion (paths, command names and command arguments), I'd really appreciate any suggestions. Otherwise I may have to look into extending dash or something.
I would also love to know the answer to this question. I am a big fan of shells and shell programming in general and POSIX shell in particular.
Only suggestion I currently have is ksh, of which there are a few implementations, ksh93 still developed (https://github.com/att/ast), pdksh from OpenBSD (of which there is a portable version here: https://github.com/ibara/oksh) and MirBSD ksh (https://www.mirbsd.org/mksh.htm).
Otherwise of interest is mrsh: https://github.com/emersion/mrsh which was recently mentioned by Drew DeVault in a blog post linked here: https://news.ycombinator.com/item?id=18777909.
EDIT: And by mentioning mrsh, I meant it as a better/easier base to extend to get what you are asking for.
Unfortunately zsh and fish are more bloated than bash, and dash and ksh are missing the features I use.
I've just found "yash" which looks like a nice compromise. I'm going to give that a try.
(Personally, I use zsh, but it's much heavier than tcsh.)
Given:
#include <stdio.h>
int main(int argc, char *argv[]) {
printf("Usage: %s [OPTIONS]\n", argv[0]);
return 0;
}
Running it as `./dir/demo --help` gives: Usage: ./dir/demo [OPTIONS]
Put it somewhere in $PATH, and run it as `demo --help`, and it will give: Usage: demo [OPTIONS]
Perfect!

But with a Bash script, argv[0] is erased: $0 is set to the script path passed to `bash` as an argument.
Given:
#!/bin/bash
echo "Usage: $0 [OPTIONS]"
Running it as `./dir/demo --help` gives: Usage: ./dir/demo [OPTIONS]
So far, so good, since the kernel ran "/bin/bash ./dir/demo --help". But once we get $PATH involved, $0 stops being useful, since the path passed to Bash is the resolved file path; if you put it in /usr/bin, and run it as `demo --help`, it will give: Usage: /usr/bin/demo [OPTIONS]
Because the call to execvpe() looks at $PATH, resolves "demo" to "/usr/bin/demo", then passes "/usr/bin/demo" to the execve() syscall, and the kernel runs "/bin/bash /usr/bin/demo --help".

In POSIX shell, $0 is a little useful for looking up the source file, but isn't so useful for knowing how the user invoked you. In Bash, if you need the source file, you're better served by ${BASH_SOURCE[0]}, rendering $0 relatively useless. And neither has a way to know how the user invoked you... until now.
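The $0-vs-BASH_SOURCE distinction is easy to see side by side. A minimal sketch (the path /tmp/argv0-demo.sh is made up for illustration):

```shell
# Write a tiny script that prints both values.
cat > /tmp/argv0-demo.sh <<'EOF'
echo "zero=$0 source=${BASH_SOURCE[0]}"
EOF

# Executed as a script: both point at the file.
bash /tmp/argv0-demo.sh
# zero=/tmp/argv0-demo.sh source=/tmp/argv0-demo.sh

# Sourced: $0 keeps the caller's name, BASH_SOURCE still points at the file.
bash -c 'source /tmp/argv0-demo.sh'
# zero=bash source=/tmp/argv0-demo.sh
```

Neither value tells you how the user typed the command, which is exactly the gap the comment describes.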
It's a small problem, but one that there was no solution for.
Some pedantry: it's actually not. The argv array is a completely arbitrary thing, passed by the caller as an array of strings and packed by the kernel into some memory at the top of the stack on entry to main(). It doesn't need to correspond to anything in particular, the use of argv[0] as the file name of the program is a side effect of the way the Bourne shell syntax works. The actual file name to be executed is a separate argument to execve().
In fact there's no portable way to know for sure exactly what file was mapped into memory by the runtime linker to start your process. And getting really into the weeds there may not even be one! It would be totally possible to write a linker that loaded and relocated an ELF file into a bunch of anonymous mappings, unmapped itself, and then jumped to the entry point leaving the poor process no way at all to know where it had come from.
#!/bin/bash
echo "Usage: $(basename "$0") [OPTIONS]"

Changing argv[0] would make utilities like ps show a more descriptive/shorter name, e.g. in the case of long command paths.
And to do what you describe, there's `exec -a NAME' already:
$ (exec -a NOT-BASH bash -c 'echo $0; ps -p $BASHPID -f')
NOT-BASH
UID PID PPID C STIME TTY TIME CMD
dualbus 18210 2549 0 19:30 pts/1 00:00:00 NOT-BASH -c echo $0; ps -p $BASHPID -f

Yes, 2008.
I would generally describe a GNU/Linux distro as being a "giant pile of shell scripts". That's a little less true with init scripts generally now being systemd units. But that's where I'd start: Look at the code that distros write, that isn't part of some other upstream software.
- Arch Linux's `makepkg` https://git.archlinux.org/pacman.git
- Arch Linux's `mkinitcpio` https://git.archlinux.org/mkinitcpio.git/
- Arch Linux's `netctl` https://git.archlinux.org/netctl.git/
- Downstream from Arch, Parabola's "libretools" dev tools package https://git.parabola.nu/packages/libretools.git/ (disclaimer: I'm the maintainer of libretools)
Gentoo also has a lot of good shell scripting to look at, but it's mostly either POSIX shell, or targets older Bash (their guidelines http://devmanual.gentoo.org/tools-reference/bash/index.html say to avoid Bash 3 features). I tend to believe that the changes made to the Bash language over the years are enhancements, and that they let you write cleaner, more robust code.
Read it! It's great. But know that a lot of bash scripting isn't really in bash; being proficient with grep, sed, cut, dc and a few other text processing utilities is really required. Learn those! Then there are a few other tricks to be mindful of: be mindful of spaces (wrap your variable substitutions in double quotes, mostly), be mindful of sub-shells (piping to something that sets variables can be problematic), and a few other things that can really only be learned by reading good code.
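The sub-shell caveat (piping to something that sets variables) is worth seeing concretely. A minimal sketch:

```shell
# Pitfall: each side of a pipe runs in a subshell, so the
# assignment inside the while loop is lost afterwards.
count=0
printf 'a\nb\nc\n' | while read -r line; do
  count=$((count + 1))
done
echo "$count"   # prints 0, not 3

# Fix (bash): feed the loop via process substitution so the
# loop body runs in the current shell.
count=0
while read -r line; do
  count=$((count + 1))
done < <(printf 'a\nb\nc\n')
echo "$count"   # prints 3
```

The same fix works with a here-string or a redirected file; the point is to keep the loop out of the pipeline.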
But it's also good to know when you shouldn't venture further. My rule of thumb is: when I'm using arrays or hash maps, it's a good idea to move to another language. That's probably Python nowadays. A lot of people use tons of awk or perl snippets inside their bash scripts; that can also be a sign that it's time to move the whole script over.
If you have more logic than a couple of string comparisons, Bash is not the right tool for the job.
Like others say, "bash" is a hard tool to get right (and I'm not saying I do it right either, necessarily, but Greg's Wiki was real helpful!). I'm building a hybrid bash/python3 environment now (something I'll hopefully open-source at some point), and bash is just the "glue" to get things set up so most aspects of development can funnel through to python3 + other tools.
But ... things that make bash real useful:
* it's available everywhere (even in Windows with Ubuntu-18.04/WSL subsystem)
* it can bootstrap everything else you need
* it can wrap, in bash functions, aliases, and "variables" (parameters), the real functionality you want to expose ... the guts can be written in python3 or other tools
Without a good bash bootstrap script you end up writing 10 pages of arcane directions for multiple platforms telling people to download 10 packages and 4 pieces of software per platform, and nobody will have consistent reproducible environments.

EDIT: I think there's a revised version of Greg's Bash Wiki in the works.
Python, sure, but replacing Bash with Node just seems like replacing a language crippled by its need to be backward compatible with (Bourne) sh with a language crippled by its need to be backward compatible with what Brandon Eich came up with in two weeks in 1995.
You're forgetting that bash really just calls other code. So you can combine two languages' stdin/stdout functionality (e.g., Python or Node) to pair code together. Sure, it won't be fast, but it can do quick wonders as an ad hoc data pipeline.
Keep them short. There are always exceptions but the "do one thing" mantra is handy, they can always be wrapped into a more complex workflow with a bigger script. None of my frequently used ones are over 100 LoC.
Write them for you and you alone when possible, start off simple and iterate. Fight that developer urge to solve a generalized problem for everyone.
Embrace the environment and global environment variables. We're trained to avoid global variables like the plague, but they are really useful in scripts. My auto-complete scripts that I mentioned above know which database to connect to based on an environment variable, and there are separate commands to switch environments.
Make sure you aren't using it where things like make would be more appropriate.
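The environment-variable tip above can be sketched with a hypothetical example (DB_ENV and the host names are invented for illustration):

```shell
# A script that picks its database host from the environment,
# defaulting to the dev environment when DB_ENV is unset.
DB_ENV=${DB_ENV:-dev}
case "$DB_ENV" in
  dev)  db_host=localhost ;;
  prod) db_host=db.example.com ;;
  *)    echo "unknown DB_ENV: $DB_ENV" >&2; exit 1 ;;
esac
echo "connecting to $db_host"
```

A separate "switch environment" command then only needs to export DB_ENV; every script picks it up for free.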
* use traps http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_12_02.htm...
* read about safe ways to do things in bash https://github.com/anordal/shellharden/blob/master/how_to_do...
* this is pretty helpful writeup of general CLI usage, yet not bash specific https://github.com/jlevy/the-art-of-command-line (related HN discussion https://news.ycombinator.com/item?id=9720813)
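Along the lines of the traps bullet above, a minimal cleanup sketch (the temp-file use case is an assumed example):

```shell
#!/usr/bin/env bash
# Remove the temp file on ANY exit - normal completion,
# an error under set -e, or an interrupt.
tmpfile=$(mktemp)
trap 'rm -f "$tmpfile"' EXIT

echo "scratch data" > "$tmpfile"
wc -c < "$tmpfile"
# when the script exits, the EXIT trap deletes $tmpfile
```

This is the usual way to keep a script from littering /tmp no matter how it terminates.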
https://tiswww.case.edu/php/chet/readline/readline.html#SEC1...
Readline is the library bash uses for editing the input line, and it has some nice movement keys. For example Alt-b moves the cursor back a word, Ctrl-u deletes to the beginning of the line, Ctrl-w removes one word behind the cursor.
They work in a bunch of other programs, like the Python interpreter's interactive mode for example.
Often, this is a good sign that you might want to switch to another scripting language.
also, https://www.shellcheck.net/ - not reading material, but pretty neat: static code analysis for shell scripts that points out common errors.
I would highly recommend BashGuide[2] and ryanstutorials[3] as a starting point. After that, go through the rest of the wooledge site for FAQs, best practices, etc.
shellcheck[4] is awesome for checking your scripts for potential downfalls and issues
[1] https://github.com/learnbyexample/scripting_course/blob/mast...
[2] https://mywiki.wooledge.org/BashGuide
I also had a storage controller go and had to figure out how to use shell builtins to create a tmpfs and reflash the firmware.
There are many reasons to provide builtins for basic file commands, from saving the extra process start to the tiny performance boost for scripts that should probably be using find instead.
It's a minor performance optimization that might be useful if you're doing thousands of rm's or stat's in a script.
That's probably a good sign you should advance from shell. If the script is trivial, it's going to be trivial in python / ruby / go / crystal / ... as well. If it's not trivial, that's another reason to move.
Alternatively - newer versions of ZSH are frequently provided by Apple.
brew install bash

At last!
> New features [...] BASH_ARGV0: a new variable that expands to $0 and sets $0 on assignment.
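A quick sketch of the quoted feature (requires bash ≥ 5.0):

```shell
#!/usr/bin/env bash
# bash >= 5.0: assigning to BASH_ARGV0 rewrites $0.
echo "before: $0"
BASH_ARGV0="friendly-name"
echo "after:  $0"
```

This is what finally lets a script report how it was invoked (or present a friendlier name) instead of being stuck with the resolved path.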
/bin/bash --version
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin18)
Copyright (C) 2007 Free Software Foundation, Inc.
macOS used to be an awesome developer machine with good tools out of the box. Now the built-in tools are just a bootstrap for their own replacement via Homebrew. Like IE for downloading Chrome.

I do think that if Apple can't ship a current version (for whatever reason), they probably shouldn't have it pre-installed at all, much like the BSDs don't come with bash by default either. Maybe they could ship with ksh/pdksh/mksh or something instead.
I assume that Apple's reasoning is similar.
I switched to zsh, personally - the one Apple ship is pretty current.
macOS is still a pretty solid developer machine.
(But it certainly suggests to me that macOS isn't actually the easiest out-of-the-box solution if your workflow includes anything more than web browsing.)
brew install bash
echo /usr/local/bin/bash | sudo tee -a /etc/shells
chsh -s /usr/local/bin/bash
and if you're on macOS and still haven't heard of homebrew, you first need to install it with /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Also, there is Ammonite. Written for scripting.
In practice, this approach often results in code that works only on Linux, or on Linux and macOS. I especially hate it when people shove #!/bin/bash as a shebang, and doubly so when it's in scripts that are a part of some npm package.
(On BSDs, Bash is not installed out of the box, and when it is installed, it's not in /bin, since it's not in the base system.)
Still, though, that's a good reason to stick to the original Bourne shell. Powershell, whatever its advantages may be, simply has too heavy dependencies on Linux, since you're dragging in CLR. At that point, like you said, might as well just use Perl or Python for scripting.
The problem with PowerShell as a scripting language is that it's designed to also be used as a shell. Which means literals can be strings, function parameters are not separated, etc. It reminds me of Tcl in the 90s.
I really like what the PowerShell environment provides (like cmdlets, .NET) - but the language itself is dismal.
I would have preferred to have seen an existing scripting language, with .NET bindings added to it... the "NIH syndrome".
program | tee filename.txt &
It's also too verbose and needs too much shift for me to comfortably use as a shell. So why not just use a 'real' language if I'm going to write a script?