Citation: https://archive.org/details/1991-proceedings-tech-conference... but note that the explanation of stream pools there is a little less precise and more general than really necessary. I believe that later versions of sfio simplified things somewhat, though I could be wrong. (I find their code fairly hard to read.)
Anyhow, ISTM a missed opportunity when new languages that don't actually use libc's routines for something reinvent POSIX's clunkier aspects.
> The FIXME comment shows the Rust team acknowledges that ideally they should check if something is executed in TTYs or not and use LineWriter or BufWriter accordingly, but I guess this was not on their priority list.
This does not inspire confidence.
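For illustration, a minimal sketch of what that FIXME describes: pick line buffering for terminals (so interactive output appears promptly) and block buffering for pipes/files. This uses `std::io::IsTerminal` (stable since Rust 1.70); the `buffered_stdout` helper is my own, not std's.

```rust
use std::io::{self, BufWriter, IsTerminal, LineWriter, Write};

// Hypothetical helper: line-buffer when stdout is a TTY (flush on '\n'),
// block-buffer otherwise (flush only when the buffer fills).
fn buffered_stdout() -> Box<dyn Write> {
    let out = io::stdout();
    if out.is_terminal() {
        Box::new(LineWriter::new(out))
    } else {
        Box::new(BufWriter::new(out))
    }
}

fn main() -> io::Result<()> {
    let mut w = buffered_stdout();
    writeln!(w, "hello")?; // visible immediately on a TTY, batched into a pipe
    w.flush()
}
```

Note this only decides the buffering mode once at startup, which is also how libc's stdio behaves; it doesn't help a program whose stdout is a pipe that a human is watching through `tee`.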
/// The returned handle has no external synchronization or buffering layered on top.
const fn stdout_raw() -> StdoutRaw;

fwrite only buffers because write is slow.
make it so write isn't slow and you don't need userspace buffering!
Buffering should basically always be work- or time-based: flush either when you've buffered enough, or when enough time has passed. You buffer because per-element latency starts bottlenecking your throughput.
If you have so little data that throughput isn't the limit, then you should be flushing.
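A sketch of that "work or time" policy (the `PolicyWriter` type and its thresholds are my own invention, not any existing API): flush when `max_bytes` have accumulated, or when `max_age` has elapsed since the buffer last became non-empty.

```rust
use std::io::{self, Write};
use std::time::{Duration, Instant};

// Hypothetical writer that flushes on a byte threshold OR an age threshold.
struct PolicyWriter<W: Write> {
    inner: W,
    buf: Vec<u8>,
    max_bytes: usize,
    max_age: Duration,
    dirty_since: Option<Instant>, // when the buffer last went non-empty
}

impl<W: Write> PolicyWriter<W> {
    fn new(inner: W, max_bytes: usize, max_age: Duration) -> Self {
        Self { inner, buf: Vec::new(), max_bytes, max_age, dirty_since: None }
    }

    fn write(&mut self, data: &[u8]) -> io::Result<()> {
        if self.buf.is_empty() {
            self.dirty_since = Some(Instant::now());
        }
        self.buf.extend_from_slice(data);
        // "Work or Time": enough bytes, or the oldest byte has waited too long.
        if self.buf.len() >= self.max_bytes
            || self.dirty_since.map_or(false, |t| t.elapsed() >= self.max_age)
        {
            self.flush()?;
        }
        Ok(())
    }

    fn flush(&mut self) -> io::Result<()> {
        self.inner.write_all(&self.buf)?;
        self.inner.flush()?;
        self.buf.clear();
        self.dirty_since = None;
        Ok(())
    }
}

fn main() -> io::Result<()> {
    let mut w = PolicyWriter::new(io::stdout(), 4096, Duration::from_millis(50));
    w.write(b"hello\n")?;
    w.flush() // explicit flush on shutdown so nothing is lost
}
```

Caveat: the age check here only runs on the next `write` call; a real implementation would need a timer or background thread to flush a buffer that goes idle.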
I think the core question is whether some middle layer of output processing could sit between program and sink/display: one that knows enough about raw mode, console dimensions, and buffering (taking terminals as the example sink) to make most programs display correctly enough for most users, without knowing the internals of the program writing the output. If that can be done, programs that need more control (e.g. complex animated/ncurses UIs) could either propose overrides/settings to the output middleware or configure it directly, and programs that don't wouldn't need to.
That's possible to implement, sure, but can that be done without just reinventing the POSIX terminal API, or any one of the bad multiplatform-simple-GUI APIs, badly?
We already have this. The TTY itself is not very special at all. It's just that the applications, traditionally, decide that they should special-case the writing to TTYs (because those, presumably, are human-oriented and should have as little batching as possible). But you, as an application developer, can simply not do this, you know.
If colours were delivered via a sideband, you wouldn't have to know whether the other side was a terminal to disable colours. You could send colours to a file and they wouldn't be stored - or would be stored in RTF format, if you were sending to an RTF file.
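To make the sideband idea concrete, here is a hedged sketch (all types here are hypothetical, not any real API): the program emits structured spans, and only the sink decides how, or whether, colour is rendered. A terminal sink lowers colour to ANSI escapes; a file sink just drops it, so no stray escape bytes end up in the file.

```rust
// Hypothetical "colour as sideband" representation.
enum Colour { Red, Green, Default }

struct Span<'a> { text: &'a str, colour: Colour }

// Terminal sink: lower the colour sideband to ANSI escape codes.
fn render_ansi(spans: &[Span]) -> String {
    let mut out = String::new();
    for s in spans {
        let code = match s.colour {
            Colour::Red => "\x1b[31m",
            Colour::Green => "\x1b[32m",
            Colour::Default => "",
        };
        out.push_str(code);
        out.push_str(s.text);
        if !code.is_empty() { out.push_str("\x1b[0m"); }
    }
    out
}

// File sink: the sideband is simply discarded.
fn render_plain(spans: &[Span]) -> String {
    spans.iter().map(|s| s.text).collect()
}

fn main() {
    let spans = [Span { text: "error", colour: Colour::Red },
                 Span { text: ": oh no", colour: Colour::Default }];
    println!("{}", render_ansi(&spans));  // coloured on a terminal
    println!("{}", render_plain(&spans)); // clean text in a file
}
```

The point is that the decision moves from the producer ("am I talking to a TTY?") to the consumer, which always knows what it is.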
The design we use on Linux is very "worse is better". Some mechanisms were developed because they could be developed, and those mechanisms, because they were the ones available, were made to fulfil every purpose they could fulfil, and now we're locked into this design for better or worse.
Windows used to have APIs to directly set text colour. You could set the colour to blue and print some text and it would be blue. You could call a function on a console window object to ask how big the console window was, or to change it. This obviously doesn't compose through pipes or ssh, but Windows doesn't have a pipe culture or ssh culture so that was never a design criterion. They've since deprecated that and moved to the worse-is-better escape-code design, in order to increase compatibility with Linux.