These days it's generally better to use Zstandard than zlib/gzip, for many reasons. And if you need a seekable format, consider squashfs as a reasonable choice. These stand on the shoulders of the giants of zlib and zip, but do indeed stand much higher in the modern world.
I was a silly kid.
I'd agree for new applications, but just like MP3, .gz files (and by extension .tar.gz/.tgz) and zlib streams will probably be around for a long time for compatibility reasons.
$ echo meow >cat
$ echo woof > dog
$ gzip cat
$ gzip dog
$ cat cat.gz dog.gz >animals.gz
$ gunzip animals.gz
$ cat animals
meow
woof
It can be very useful: https://github.com/google/crfs#introducing-stargz
It could be a typo, though I think when we say something "isn't specially/specifically/particularly useful" we mean "compared to the set of all features, specifically this subset feature is not that useful" not that the feature isn't useful for specific things
Fun fact: tar files are also (semi-)concatenable; you just need `-i` when extracting. This also means compressed (gz/zstd) tarballs are (semi-)concatenable too!
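A minimal sketch of that with GNU tar (the file and directory names are made up):

```shell
# Build two small gzipped tarballs and concatenate them.
mkdir -p a b
echo one > a/one.txt
echo two > b/two.txt
tar czf first.tgz a
tar czf second.tgz b
cat first.tgz second.tgz > both.tgz

# Remove the originals so extraction demonstrably restores both trees.
rm -r a b

# Without -i (--ignore-zeros), tar stops at the first archive's
# end-of-archive marker; with it, both members are extracted.
tar -izxf both.tgz
```

The "(semi-)" caveat is exactly that flag: a plain `tar xzf both.tgz` would silently extract only the first archive.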
Is there a limit in the default gunzip implementation? I'm aware of the concept of ZIP/tar bombs, but I wouldn't have expected gunzip to ever produce more than one output file, at least when invoked without options.
(I have been using 7zip for about 15 years to produce archive files that have an index and can quickly extract a single file and can use multiple cores for compression, but I would love to have an alternative, if one exists).
Even if a question is super similar to one that was previously asked, it has value in exactly that: it might be phrased slightly better and be a closer match to what people were Googling.
So with regards to the internet: The 90s and early 00s were great, then the internet became mainstream and it all just became Cable TV 2.0.
I know most people are pessimistic that LLMs will cause SO and the web in general to be overrun by hallucinated content and an AI-training-on-AI ouroboros, but I wonder if they might instead let curious people query an endlessly patient AI assistant about exactly this kind of information. (A custom GPT, perhaps?)
But almost nobody reads that.
https://stackexchange.com/users/1136690/mark-adler#top-answe...
> This post is packed with so much history and information that I feel like some citations need be added
> I am the reference
(extracted a part of the conversation)
I think it makes perfect sense as a general and strict policy for an encyclopedia. It would simply be too hard to audit every case to check if it's someone like you, or a crank.
But why is a reference to "[1] Blog post by XXX" (or, even worse, "[1] Blog post by YYY based on their tentative understanding of XXX") a more authoritative source than "[1] Added to Wikipedia personally by XXX"? Of course, Wikipedia potentially has no proof that the editor was actually XXX in the latter case; but they have even less proof that a blog post purporting to be by XXX actually is.
(I was thinking of creating a version control system whose .git directory equivalent is basically an archive file that can easily be emailed, etc.)
- https://github.com/onekey-sec/unblob/blob/main/unblob/handle...
- https://github.com/onekey-sec/unblob/blob/main/unblob/handle...
- https://github.com/onekey-sec/unblob/blob/main/unblob/handle...
Disclaimer: I'm the author.
Salty form: They're all quite slow compared to modern competitors.
lz4 and zstd have both been very popular since their release; they're related designs by the same author, though zstd has had more thorough testing and fuzzing and is more featureful. lz4 retains extremely fast decompression.
Snappy also performs very well; when tuned to comparable compression levels, zstd and snappy come out very close.
In recent years Zstd has started to make heavy inroads in broader usage in OSS with a number of distro package managers moving to it and observing substantial benefits. There are HTTP extensions to make it available which Chrome originally resisted but I believe it's now finally coming there too (https://chromestatus.com/feature/6186023867908096).
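For anyone who hasn't tried it, the zstd CLI mirrors gzip's usage closely. A quick round-trip sketch (assumes the standard `zstd` tool is installed; the file name is made up):

```shell
# Round-trip a file through zstd, gzip-style.
printf 'meow meow meow\n' > sample.txt
zstd -q sample.txt -o sample.txt.zst      # compress (add e.g. -19 for higher ratios)
zstd -dq sample.txt.zst -o restored.txt   # decompress
cmp sample.txt restored.txt               # identical -> exit status 0
```

Recent GNU tar also speaks it directly, e.g. `tar --zstd -cf archive.tar.zst folder`.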
In gaming circles there's also Oodle and friends from RAD Game Tools, which are now available in Unreal Engine as built-in compression offerings (since 4.27). You could see the effects of this in, for example, Ark: Survival Evolved (250GB) -> Ark: Survival Ascended (75GB, with richer models & textures), and the associated improved load times.
FOO=$(tar cf - folderToCompress | gzip | base64)
echo "$FOO" | base64 -d | zcat | tar xf -