Please consider removing any implicit network calls, like the initial "Checking GitHub for updates...". This alone will deter people from adopting the tool or even trying it further. It is similar to GNU parallel's --citation prompt, which, albeit a small thing, scares many people off.
Consider adding pivot and unpivot operations. Mlr gets the syntax quite right, but is unusable at scale since it doesn't work in streaming mode and tries to load everything into memory, despite claiming otherwise.
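Until such a command exists, a streaming unpivot (melt) can be approximated with plain awk. This is a rough sketch assuming a simple CSV with no quoted commas; the column names (`id`, `q1`, `q2`, `metric`) are made up for illustration:

```shell
# Unpivot sketch: turn wide columns into key/value rows, one pass,
# constant memory (assumes no quoted commas in the CSV).
printf 'id,q1,q2\nA,1,2\nB,3,4\n' |
awk -F, 'NR == 1 { for (i = 2; i <= NF; i++) h[i] = $i; print "id,metric,value"; next }
         { for (i = 2; i <= NF; i++) print $1 "," h[i] "," $i }'
```

Each wide row becomes one output row per value column, keyed by the saved header names.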
Consider adding a basic summing command. Sum is the most common data operation, which could warrant its own specially optimized command instead of offloading it to an external math processor like lua or python. Even better if it had group-by (-by) and window-by (-over) capability, e.g. `qsv sum col1,col2 -by col3,col4`. Brimdata's zq utility is the only one I know of that gets this quite right, but it is quite clunky to use.
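As a stopgap for the proposed (hypothetical) `qsv sum col1 -by col3` syntax, a grouped sum streams nicely through awk. A minimal sketch, assuming an unquoted CSV where column 1 is the value and column 2 the group key:

```shell
# Grouped sum with awk: accumulate per key in an associative array,
# emit totals at the end (memory proportional to distinct keys only).
printf 'col1,col3\n10,a\n5,b\n7,a\n' |
awk -F, 'NR > 1 { sum[$2] += $1 } END { for (k in sum) print k "," sum[k] }'
```

Note that awk's `for (k in sum)` iteration order is unspecified, so pipe through `sort` if you need stable output.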
Consider adding a laminate command: essentially, adding a new column with a constant value. This could probably be achieved by joining with a single-row file, but why not make this common operation easier to use.
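For comparison, laminating by hand is a one-liner in awk. A sketch assuming an unquoted CSV; the column name `origin` and the constant `source` are placeholders:

```shell
# Laminate by hand: append a constant-valued column to a CSV
# (header row gets the new column name, data rows get the constant).
printf 'id,name\n1,ann\n2,bob\n' |
awk -F, 'NR == 1 { print $0 ",origin" } NR > 1 { print $0 ",source" }'
```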
Consider an option to concatenate csv files with mismatched headers. `cat rows` and `cat columns` complain about the mismatch. One of the most common problems in handling csvs is schema evolution; I and many others would appreciate being able to merge similar csvs together easily.
Conversions to and from other standard formats would be appreciated (parquet, ion, fixed-width, avro, etc.). Other compression formats as well - especially zstd.
It would be nice if the tool made it easy to embed the output of external commands. The built-in lua and python support is nice, but probably not sufficient. I'd like to be able to run a jq command on a single column and merge the result back in as another column, for example.
Inspiration:
- csvquote: https://news.ycombinator.com/item?id=31351393
- teip: https://github.com/greymd/teip

Would you be more likely to use this tool if it had even more stuff in it, requiring reading even more documentation? That's a genuine question.
As maintainer of qsv, here's my reply:
- Given qsv's rapid release cycle (173 releases over three years), the auto-update check is essential at the moment. Once we reach 1.0, I'll turn it off. For now, given your feedback, I've only made it check 10% of the time.
- Pivot is in the backlog and I'll be sure to add unpivot when I implement it. (https://github.com/jqnatividad/qsv/issues/799)
- I'll add a dedicated summing command with the group by (-by) and window by (-over) capability (https://github.com/jqnatividad/qsv/issues/1514). Do note that `stats` has basic sum as @ezequiel-garzon pointed out.
- With the `enum` command, qsv can achieve what you proposed with `laminate`. E.g. `qsv enum --new-column newcol --constant newconstant mydata.csv --output laminated-data.csv`
- With the `cat rowskey` command, qsv can already concatenate files with mismatched headers.
- Other file formats: qsv supports parquet, csv, tsv, excel, ods, datapackage, sqlite and more (see https://github.com/jqnatividad/qsv/tree/master#file-formats). Fixed-width format, though, is not supported yet; it's quite interesting, and I've added it to the backlog (https://github.com/jqnatividad/qsv/issues/1515)
- As to "enable embedding outputs of commands": qsv is composable by design, so you can use standard stdin/stdout redirection/piping techniques to make it work with other CLI tools like jq, awk, etc.
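The run-a-filter-over-one-column idea can be stitched together with standard tools along these lines. A sketch assuming an unquoted CSV; here `tr a-z A-Z` stands in for any per-column filter such as jq, and the file and column names are made up:

```shell
# Extract one column, run it through an external filter, and paste the
# result back as a new column (assumes no quoted commas in the CSV).
printf 'id,name\n1,ann\n2,bob\n' > data.csv
{ echo name_upper; tail -n +2 data.csv | cut -d, -f2 | tr a-z A-Z; } > newcol.txt
paste -d, data.csv newcol.txt
```

This only works because the filter emits exactly one output line per input line; anything that reorders or drops rows would desynchronize the paste.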
Finally, just released v0.120.0 that already incorporates the less aggressive self-update check. https://github.com/jqnatividad/qsv/releases/tag/0.120.0
At minimum, it is not installed by default, so it already starts at a disadvantage versus just using xargs. That it then puts that barrier in my way makes it an easy tool to skip.
Bonus SO post to enhance your fury:
https://stackoverflow.com/questions/61762189/installing-gnu-...
I wonder who really uses these tools, and for what, given that R and Python data science tools are available.
Doing a comparable amount of manipulation in Python takes a lot more boilerplate (imports, command-line arguments, deity-can-we-default-to-Int64 already?, etc.), plus you have to ensure you have a virtual environment with the correct dependencies. That is more or less just standard numpy+pandas, but a single executable tool for quick data workup is always appreciated.
I am never performance-constrained, but I have been told that miller is one of the slower tools in this space. I still reach for it due to its wide format support.
I keep multiple little Python scripts around to do things like sum lists of numbers (think extracting a column with awk, then calculating a sum). Compiled vs an interpreted script really doesn’t matter. What matters is using the right algorithm for the job. R and Python data science libraries like to read in all of the data at once into one single data structure. That’s the anti-pattern to avoid if at all possible.
(But they are very handy for small datasets of complex calculations that require the entire dataset in memory. )
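The awk-column-then-sum pattern mentioned above collapses into a single streaming pass. A minimal sketch with made-up whitespace-separated data, summing the third field:

```shell
# Streaming sum of one column: constant memory, one pass over the data,
# instead of loading the whole dataset as R/pandas typically would.
printf '1 a 10\n2 b 20\n3 c 12\n' |
awk '{ s += $3 } END { print s }'
# prints 42
```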
http://hopper.si.edu/wiki/mmti/Starbase
Their tbl format is so trivially close to standard csv that I just convert on the fly back and forth with tiny helper perl scripts.
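A conversion in that spirit can be done without perl at all. This is only a sketch, assuming the tbl file is tab-separated with a dashed separator line under the header and contains no embedded commas or tabs:

```shell
# Rough tbl -> csv: drop the dashed separator (line 2), swap tabs for commas.
printf 'name\tval\n----\t---\nann\t10\n' |
sed '2d' | tr '\t' ','
```

The reverse direction would swap the translation and re-insert a dashed line after the header.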