Depending on how often you write to the row, you could avoid merging on write altogether by using the APPEND_IF_FITS atomic operation and just merging the byte arrays when you read.
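For concreteness, here's a minimal sketch of that pattern using the Python bindings. The key layout and the fixed 8-byte record width are my assumptions, not anything from the setup described here:

```python
import fdb

fdb.api_version(630)
db = fdb.open()

@fdb.transactional
def append_observation(tr, key, record):
    # APPEND_IF_FITS is a blind, server-side atomic mutation: no read and
    # no read-conflict range, so concurrent appenders never conflict.
    # It becomes a no-op if the appended value would exceed the maximum
    # value size, so this only suits rows that stay comfortably small.
    tr.append_if_fits(key, record)

@fdb.transactional
def read_merged(tr, key):
    raw = tr[key]
    if not raw.present():
        return []
    # Merge on read: split the concatenated blob back into fixed-width
    # records (8 bytes each in this sketch) and combine them however
    # the use case requires.
    blob = bytes(raw)
    return [blob[i:i + 8] for i in range(0, len(blob), 8)]
```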
It's nice that FDB gives you so much low-level flexibility; you can do whatever fits your use case.
> what is the volume your system is operating at?
This varies, since our workload is dynamic: anyone can inject a query against the data stream at any time. But for the sake of discussion, let's say 5k.
> Also how does it work for skews?
FoundationDB does a magnificent job of automatically detecting skew and physically relocating data. However, to mitigate write skew, I use time bucketing techniques where part of the key is a MURMUR3 hash of the minute_of_hour, so that a heavy write load can only affect a server for one minute. This has helped with certain metrics.
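As a sketch, the bucketing can be as simple as the following; the exact key layout, and using the `mmh3` library for MurmurHash3, are assumptions on my part:

```python
import time

import mmh3  # third-party MurmurHash3 bindings: pip install mmh3

def bucketed_key(base_key: bytes) -> bytes:
    # Prefix the key with a MurmurHash3 of the current minute_of_hour.
    # A hot write load therefore lands on any one key range (and hence
    # any one storage server) for at most one minute before the prefix,
    # and with it the shard, changes.
    minute_of_hour = time.gmtime().tm_min
    prefix = mmh3.hash(str(minute_of_hour), signed=False).to_bytes(4, "big")
    return prefix + b"/" + base_key
```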
> Let's say you need to update an HLL for a key that is heavily skewed. Does your FDB transaction unwind fast enough not to slow down the whole system?
There isn't really a concept of an HLL (or key) being heavily skewed. A key lives on a single server (or several, depending on replication). Essentially, when I want to merge additional HLL content into one that's already stored, I just read it, deserialize it, merge it with the one I have, and write the result back to FDB. Because of transactions, I can ensure that nobody else is doing the exact same thing at the same time. If someone were, then my (or their) transaction would fail and retry. The retry is important because it reattempts the same logic, except the value read back from the database is now the result somebody else merged in. This lets you ensure that idempotent / atomic operations happen as you'd expect.
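A minimal sketch of that read-merge-write loop, assuming a hypothetical HLL codec (`deserialize`, `merge`, `serialize` are stand-ins for whatever sketch library is in use):

```python
import fdb

fdb.api_version(630)
db = fdb.open()

@fdb.transactional
def merge_hll(tr, key, incoming_bytes):
    # Hypothetical codec: deserialize(bytes) -> sketch, sketch.merge(other)
    # -> sketch, sketch.serialize() -> bytes. Swap in your HLL library.
    incoming = deserialize(incoming_bytes)
    existing = tr[key]
    if existing.present():
        incoming = incoming.merge(deserialize(bytes(existing)))
    tr[key] = incoming.serialize()
    # If another client commits a write to `key` between this read and
    # the commit, the commit fails with a conflict and @fdb.transactional
    # retries the whole function; the re-read then sees their merged
    # value, which is exactly the retry behavior described above.

# merge_hll(db, b"hll/ips/user-1234", new_sketch_bytes)
```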
Let's say you are counting distinct IPs used by `users` with HLL, and you start getting DDoSed via certain users. Since I am assuming you are not doing a shuffle before writing to FDB, you will be locking the user, reading the HLL, deserializing, merging, and writing back to FDB from multiple machines, which will result in a lot of rejected transactions and retries. My question is whether the retries unwind fast enough, or whether you end up dropping data on the floor because you exhaust the retry count.
Additionally, depending on the write rate and the size of the data being written to the HLL, it may be worth writing the merged sketch out only periodically and keeping a log of recent raw values that you merge in at read time (sketched below).
There is a trade-off between needlessly rewriting mostly unchanged data and read performance, similar to the I/O amplification trade-off in storage engines derived from log-structured merge trees.
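A sketch of that write-behind pattern, reusing the hypothetical HLL codec from above; the subspace layout and helper names (`empty_hll`, `sketch.add`) are made up for illustration:

```python
import time
import uuid

import fdb

fdb.api_version(630)
db = fdb.open()

LOG = fdb.Subspace(("hll_log",))    # recent raw values: cheap blind writes
SNAP = fdb.Subspace(("hll_snap",))  # periodically rewritten merged sketch

@fdb.transactional
def log_value(tr, hll_key, raw_value):
    # Blind write into a per-key log: no read, so no conflicts, and the
    # large merged sketch is not rewritten on every single observation.
    tr[LOG.pack((hll_key, time.time(), uuid.uuid4().bytes))] = raw_value

@fdb.transactional
def compact(tr, hll_key):
    # Periodically fold the logged raw values into the stored sketch and
    # clear the log. Readers perform the same fold at read time over any
    # entries that have not been compacted yet.
    snap = tr[SNAP.pack((hll_key,))]
    sketch = deserialize(bytes(snap)) if snap.present() else empty_hll()
    rng = LOG.range((hll_key,))
    for kv in tr.get_range(rng.start, rng.stop):
        sketch.add(kv.value)  # hypothetical: add one raw value to the sketch
    tr[SNAP.pack((hll_key,))] = sketch.serialize()
    tr.clear_range(rng.start, rng.stop)
```

Note that under heavy write load the compaction transaction will conflict with fresh log appends, so in practice you'd likely bound the range it reads or use snapshot reads with an explicit watermark.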