I must say that LZ4 definitely stands out for both compression and decompression speed, while still being able to cut the data size roughly in half, making it quite suitable for live filesystems and network protocols. Particularly interesting was also comparing Zstd and LZ4[3]: the former is substantially slower, but achieves a compression ratio somewhere between zlib and xz, while beating both in time (in my benchmark at least).
[1] https://github.com/AgentD/squashfs-tools-ng/blob/master/doc/...
[2] https://github.com/AgentD/squashfs-tools-ng/blob/master/doc/...
[3] https://github.com/AgentD/squashfs-tools-ng/blob/master/doc/...
Actually, no. lz4 is less suitable than zstd for filesystems.
BTW, lz4 is present in many Mozilla tools like Thunderbird, in the form of its bastard child lz4json, which diverges only in its headers but as a result doesn't work with regular lz4 tools.
> achieving a compression ratio somewhere between zlib and xz, while beating both in time (in my benchmark at least)
Your observation is correct: zstd is now standard and the default on openzfs 2.0, replacing lz4.
The 19 compression levels offer more flexibility than lz4 alone. Another strength is that decode time is not a function of the compression level, which is good on coldish storage that's rarely updated.
Are you sure? The default compression level has always been "off", but when switched on - the default has been lz4 for about 5 years. Zstd support was added less than a year ago and there are still a lot of things that need to be fixed before one could even suggest that it might be a sane default. I like zstd, but I like my uncorrupted data more. I know that compatibility between compressor versions and pools is a concern, and there are also the compression performance problems with the way zstd handles zfs block sizes. Thankfully lz4 works great for zfs and has for many years now.
https://github.com/openzfs/zfs/blob/master/include/sys/zio.h...
Why's that? What benefit would I get from switching? Is it workload-dependent?
EDIT: To be clear, I'm not disagreeing; if zstd will work better, I want to know about it so that I can switch my pools to use it.
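For reference, switching a dataset over is just a property change. This is a hedged sketch: the dataset name `tank/data` is a placeholder, zstd support requires OpenZFS 2.0+, and only newly written blocks get the new compression.

```shell
# Enable zstd (default level 3) on a dataset; zstd-N selects levels 1-19.
zfs set compression=zstd tank/data

# Verify the property and see the ratio achieved on data written so far.
zfs get compression,compressratio tank/data
```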
Still, zstd offers much better compression and has an adjustable compression level: https://github.com/facebook/zstd Decompression is always fast, but you can trade off compression speed vs. ratio.
In general, if you send data over a network, zstd is quite profitable. Even with network-attached disks like AWS EBS, or with AWS S3, it can be hugely profitable.
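The level trade-off is easy to see on any machine. A minimal sketch, using stdlib zlib's levels 1-9 as a stand-in for zstd's 1-19 (neither lz4 nor zstd ships with Python; with the third-party `zstandard` package the idea is identical): higher levels cost compression time, shrink the output, and decompress back identically.

```python
import zlib

# Repetitive, network-protocol-like payload (illustrative data only).
data = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 2000

fast = zlib.compress(data, 1)   # fastest level
best = zlib.compress(data, 9)   # best-ratio level

print(f"original: {len(data)}  level 1: {len(fast)}  level 9: {len(best)}")

# Either stream decompresses back to the same bytes.
assert zlib.decompress(fast) == data
assert zlib.decompress(best) == data
```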
This is the case in ClickHouse: if data is compressed, we decompress it in blocks that fit in CPU cache and then perform data processing inside the cache; if data is uncompressed, a larger amount of data must be read from memory.
Strictly speaking, LZ4 data decompression (typically 3 GB/sec) is slower than memcpy (typically 12 GB/sec). But when using e.g. 128 CPU cores, LZ4 decompression will scale up to memory bandwidth (typically 150 GB/sec) as well as memcpy. And memcpy is wasting more memory bandwidth by reading uncompressed data while LZ4 decompression reads compressed data.
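A rough single-core sketch of that point, with stdlib zlib standing in for LZ4 (real LZ4 is far faster than zlib, so treat the numbers as illustrative only): the plain copy moves every byte through memory, while decompression reads only the compressed bytes.

```python
import time
import zlib

# ~10 MB of compressible data (repeating 256-byte pattern).
data = bytes(range(256)) * 40_000
compressed = zlib.compress(data, 1)

t0 = time.perf_counter()
copy = bytes(data)                    # memcpy-like: reads all 10 MB
t1 = time.perf_counter()
out = zlib.decompress(compressed)     # reads only the compressed bytes
t2 = time.perf_counter()

assert copy == data and out == data
print(f"copy:       {len(data) / (t1 - t0) / 1e9:.2f} GB/s")
print(f"decompress: {len(data) / (t2 - t1) / 1e9:.2f} GB/s "
      f"(reads only {len(compressed) / 1e6:.2f} MB from memory)")
```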
I haven't been into C/C++ for years though, so I wouldn't grok the code now, sadly.
LZ4 is one of the algorithms supported by Zram in Linux. It's fairly popular for people using things like a Raspberry PI that have a smaller amount of RAM.
https://www.techrepublic.com/article/how-to-enable-the-zram-...
I configure my own kernel on Arch and Zswap is enabled by default there, too.
[0] https://klarasystems.com/articles/openzfs1-understanding-tra...
This article refers to the padding on the final record as "slack space" and states that you need to enable compression to eliminate the slack space.
https://arstechnica.com/information-technology/2020/05/zfs-1...
See also:
https://old.reddit.com/r/zfs/comments/gzcysy/h264_h265_media...
For image, video and audio there are more efficient compressions taking advantage of those formats.
That said it can be quite tricky to rewrite something to efficiently work on compressed data.
I always assumed S3 storage was compressed on the fly by AWS, regardless of how the client chooses to store their data.
Per the table at [0], zstd provides only a slight improvement in compression ratio, and in exchange is about half the speed of lz4.
They both have their place.
zstd, brotli, snappy were seemingly all made with high-end x86 capabilities in mind.
To put it simplistically, if you have a file which is a (good) random mix of an equal number of A and B characters, LZ4 won't be able to compress it significantly, while Zstd will compress it 8:1, converging to an encoding where a '1' bit means A and a '0' bit means B.
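This is easy to check with an entropy coder from the stdlib. In this sketch zlib's Huffman stage stands in for zstd's FSE/Huffman entropy coding (neither LZ4 nor zstd is in the stdlib): a random mix of two equally likely bytes carries 1 bit of entropy per byte, so an entropy coder approaches 8:1, while a pure match-based coder like LZ4 has little to grab onto.

```python
import random
import zlib

random.seed(0)
# 128 KiB of randomly interleaved 'A' and 'B' bytes.
data = bytes(random.choice(b"AB") for _ in range(1 << 17))

out = zlib.compress(data, 9)
print(f"{len(data)} -> {len(out)} bytes, "
      f"ratio {len(data) / len(out):.1f}:1")
```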
I checked it. LZ4 still reduces the size to half, no idea why half. So a 10 MB file compresses to 5 MB.
Edit: checked with the highest compression level and it compresses a 1 MB file to 185 KB. So what the parent wrote is false.
Yes, however most tools that use ZSTD offer no facility to train the compressor with a dictionary.
The way this would probably work without this facility though, say, in a database, is that the dictionary is maintained internally and constructed on the fly from the field data and not exposed to users. Although, I don't know if you'd have to keep every version of the dictionary in order to successfully decompress old data? If so then perhaps this is a niche feature
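The mechanics can be sketched with zlib's preset-dictionary support (`zdict`), which ships with Python and serves the same purpose as zstd's trained dictionaries for small records. The dictionary and record contents here are made up for illustration. Note the dictionary must be byte-identical at compression and decompression time, which is exactly why a database would need to retain every dictionary version it ever produced.

```python
import zlib

# A "dictionary" of boilerplate shared by many small records.
dictionary = b'{"user_id": , "status": "active", "created_at": "20'
record = b'{"user_id": 42, "status": "active", "created_at": "2021-01-01"}'

# Compress one small record with and without the preset dictionary.
c = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS, zdict=dictionary)
with_dict = c.compress(record) + c.flush()
without_dict = zlib.compress(record, 9)

# Decompression needs the very same dictionary bytes.
d = zlib.decompressobj(zlib.MAX_WBITS, zdict=dictionary)
assert d.decompress(with_dict) == record

print(f"no dict: {len(without_dict)} bytes, "
      f"with dict: {len(with_dict)} bytes")
```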
lz4 >> snappy
zstd >> brotli
https://corecursive.com/frontiers-of-performance-with-daniel...
Some people are just really good at performance sensitive stuff.
It makes it very clear where LZ4 fits in comparisons of compression speed, decompression speed, and compression ratio.
(Linking to this more for the overview than the Windows tool in itself.)
So there's a bunch of forks with useful features that'll never be adopted because there's no collaboration.
At least that's what I could tell when I looked into it.
The extra decompression on top of the data fetch latency can be quite noticeable. Sometimes that can be offset if the compression ratio improves a hit rate, thereby decreasing latency. The problem of course is that even with 10M-IOPS storage devices, it is frequently latency and an inability to keep 100k requests outstanding that limit perf to one's IO turnaround latency.
Put another way, compressed RAM and disk are really popular in systems which are RAM constrained or bandwidth limited, because the cost of fetching 2x the data vs 1x and decompressing it is a win (think phones with eMMC). The problem is that this doesn't really make sense on high-end NVMe (or for that matter desktops/servers with a lot of RAM), where the cost of fetching 8k vs 4k is very nearly identical because the entire cost is front-loaded on the initial few bytes, and after that the transfer overhead is minimal. It's even hard to justify on reasonable HD/RAID systems for bandwidth reasons, since any IO that requires a seek by all the disks will then tend to flood the interface. AKA it takes tens of ms for the first byte, but then the rest of it comes in at a few GB/sec, and decompressing at faster rates takes more than a single core.
edit: And to add another dimension, if the workload is already CPU bound, then the additional CPU overhead of compress/decompress in the IO path will likely cause a noticeable hit too. I guess what a lot of people don't understand is that a lot of modern storage systems are already compressed at the "hardware" layer by FTLs etc.
https://github.com/strigeus/ipzip - TCP/IP Packet Compressor with LZ4 support
https://github.com/centaurean/density - Extremely fast de/compression
I was thinking of using LZ4, but it doesn't really work that great on floating point, and images are already compressed (png, jpg, and even BCn can't be compressed much further). So idk. The good thing about lz4 is that it's very simple[0] and probably faster than memcpy().
S3TC is block compression, so if there is repeating data in images it will compress quite well.
https://store.steampowered.com/news/app/991270/view/18064466...
I've yet to really understand what kind of footprint LZ4 uses, and if it's dependent on dictionary size used to compress. What if I have, say, 4KB that I could use to store in-place decompression. Is that related to the compression ratio?
[1] https://community.arm.com/developer/ip-products/processors/b...
Although the implementations are in Rust, I assume the provided benchmarks are representative of any optimised implementation...
Many compressor algorithms are compared on several data sets.
The results tables show compressed size, compression and decompression times for a number of normal and pathological cases.
You get a good feel for the strengths and weaknesses.
Some algs really go downhill in pathological cases, such as with random data.
Do consider encryption too, though you probably want to apply it to the compressed data where possible.
Sometimes external encryption means you will be stuck with something close to pathological...