There may be better constants than 2^64/phi; perhaps some large prime with a roughly equal number of one and zero bits would also work.
This prevents the bucket collisions on hash table resizing that can lead to "accidentally quadratic" behavior [2], without requiring rehashing with a different salt.
I didn't do a detailed analysis of whether it helps with hash table merging too, but I think it would.
[0] https://probablydance.com/2018/06/16/fibonacci-hashing-the-o...
[1] https://news.ycombinator.com/item?id=43677122
[2] https://accidentallyquadratic.tumblr.com/post/153545455987/r...
Fibonacci hashing really just adds another multiplicative hashing step and then drops the bottom bits with a shift instead of dropping the top bits with an AND. Since it still works by dropping bits, items that were near each other before the resize will still be near each other after the resize, so it won't really change anything.
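For concreteness, a minimal sketch (mine, not any particular library's code) of the two index computations being contrasted; the multiplier is floor(2^64 / phi) and the capacity is assumed to be a power of two:

    // Fibonacci hashing keeps the TOP bits via a right shift,
    // while plain masking keeps the BOTTOM bits via AND.
    const FIB: u64 = 0x9E37_79B9_7F4A_7C15; // floor(2^64 / phi), odd

    fn fib_index(hash: u64, log2_cap: u32) -> usize {
        // log2_cap assumed to be in 1..=63
        (hash.wrapping_mul(FIB) >> (64 - log2_cap)) as usize // drop bottom bits
    }

    fn mask_index(hash: u64, cap: usize) -> usize {
        (hash as usize) & (cap - 1) // cap is a power of two; drop top bits
    }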
If you merge linear-probed tables by iterating in sorted hash order, then you are matching the storage order; this can congest particular parts of the destination table and trigger linear probing's worst-case behaviour.
By changing the iteration order, or by salting the hash, you can avoid this.
Of course chained hash tables don't suffer from this particular problem.
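A sketch of the salting option for a hand-rolled open-addressing table (home_slot is a hypothetical helper; RandomState is std's per-instance random hasher factory). If the destination carries its own salt, the slot a key lands in there is decorrelated from where it sat in the source:

    use std::collections::hash_map::RandomState;
    use std::hash::{BuildHasher, Hash, Hasher};

    // Hypothetical helper: hash the key through the destination table's
    // own RandomState, so the destination's probe positions have nothing
    // to do with the source table's storage order.
    fn home_slot<K: Hash>(salt: &RandomState, key: &K, capacity: usize) -> usize {
        let mut h = salt.build_hasher();
        key.hash(&mut h);
        (h.finish() as usize) & (capacity - 1) // capacity assumed a power of two
    }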
My quick thought is that hash tables ought to keep an internal salt hidden away. This seems good for avoiding 'attacks' as well as for speeding up merging etc. The only downside I can think of is that creating the table needs to fetch a random salt, which might not be quick, although that can be alleviated by allowing the salt to be set externally at table creation, so people who don't care can set it to 0 or whatever. What am I missing?
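For what it's worth, Rust's standard HashMap already works roughly this way: RandomState draws per-instance random keys, and with_hasher lets you opt into a fixed hasher if you don't care. A minimal sketch of both:

    use std::collections::hash_map::{DefaultHasher, RandomState};
    use std::collections::HashMap;
    use std::hash::BuildHasherDefault;

    fn main() {
        // Default: each map draws fresh random keys at creation time.
        let _salted: HashMap<&str, i32, RandomState> = HashMap::new();

        // Opting out: a fixed, deterministic hasher for people who don't care.
        let _fixed: HashMap<&str, i32, BuildHasherDefault<DefaultHasher>> =
            HashMap::with_hasher(BuildHasherDefault::default());
    }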
The keys are hidden/secret from systems external to the application.
> Abseil
> Rust standard
> hashbrown
These hash tables are not based on plain linear probing; they use something that's essentially quadratic probing done in chunks. Not sure about the others, but they might be doing something similar.
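As an illustration of "quadratic probing done in chunks" (a sketch in that spirit, not Abseil's or hashbrown's actual code): examine a whole group of slots per step, advancing by a growing stride, so group offsets form triangular numbers and every group is still visited exactly once when the group count is a power of two:

    const GROUP_WIDTH: usize = 16; // slots examined per probe step

    // Yields the starting slot of each group to examine, in probe order.
    // num_groups is assumed to be a power of two; strides 1, 2, 3, ...
    // produce triangular offsets, which visit all groups exactly once.
    fn probe_order(hash: usize, num_groups: usize) -> impl Iterator<Item = usize> {
        let mask = num_groups - 1;
        (0..num_groups).scan(hash & mask, move |group, stride| {
            let current = *group;
            *group = (*group + stride + 1) & mask; // steps of 1, 2, 3, ...
            Some(current * GROUP_WIDTH)            // slot offset of the group
        })
    }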
Let h0 be the larger table, and h1 the smaller. N = len(h0), M = len(h1). Pretend the elements of the tables are sequentially indexed. Element[0] is h0[0], Element[N] is h1[0], etc.
    h0' = Resize(h0, (N+M) * capacity_factor)
    for x in 0..(N+M):
        y = permute(x, 0, (N+M) * capacity_factor)
        if y >= N: move_to(h0'[y], element[x])
One allocation, and you move the minimum number of elements needed to eliminate primary clustering. Elements in h0 that aren't moved would presumably remain correctly indexed. You would have to move the remaining elements of h1 as well, but that would have cluttered the example. Any random-ish permutation works, from basic ones up to cryptographic. If your permutation only works on certain powers of two, iterate it until the result is in range.
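That last sentence is the classic cycle-walking trick. A minimal sketch of such a permute (my own construction with a simplified two-argument signature, not the commenter's; the odd multiplier and xor-shift are arbitrary invertible steps modulo a power of two):

    // Random-ish permutation of 0..n for any n, not just powers of two.
    // Requires x < n. Each step is a bijection on 0..2^bits: multiplying
    // by an odd constant mod 2^bits, then an xor-shift. Out-of-range
    // outputs are fed back in ("iterate it until the result is in range"),
    // which keeps the whole thing a permutation of 0..n.
    fn permute(x: u64, n: u64) -> u64 {
        assert!(x < n && n >= 2);
        let bits = u64::BITS - (n - 1).leading_zeros(); // covering power of two
        let mask = if bits == 64 { u64::MAX } else { (1u64 << bits) - 1 };
        let mut y = x;
        loop {
            y = y.wrapping_mul(0x9E37_79B9_7F4A_7C15) & mask; // odd multiplier
            y ^= y >> (bits / 2).max(1);                      // xor-shift mix
            if y < n {
                return y;
            }
        }
    }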
HashMap implements Extend, so just h0.extend(h1) and you're done; the people who made your HashMap type are much better equipped to optimize this common operation.
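For reference, a minimal usage sketch with std's HashMap; extend consumes h1 and inserts every pair:

    use std::collections::HashMap;

    fn main() {
        let mut h0 = HashMap::from([("a", 1), ("b", 2)]);
        let h1 = HashMap::from([("b", 20), ("c", 3)]);
        h0.extend(h1); // consumes h1; later values win on duplicate keys
        assert_eq!(h0.get("b"), Some(&20));
        assert_eq!(h0.len(), 3);
    }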
In a new enough C++ you might, in theory, find the same functionality supported, but Quality of Implementation tends to be pretty frightful.
Are you sure? I'm not very used to reading the Rust stdlib, but this seems to be the implementation of the default HashMap extend [1]. It just calls self.base.extend; self.base appears to be hashbrown::hash_map, and this is the source for its extend [2]. In other words, it does exactly the same thing: it just iterates through the hash map and inserts each entry.
Maybe I'm misreading something going through the online docs, or Rust does the "random seed" thing that Abseil does, but blindly assuming something doesn't happen "because Rust" is a bit silly.
[1]: https://doc.rust-lang.org/src/std/collections/hash/map.rs.ht...
[2]: https://docs.rs/hashbrown/latest/src/hashbrown/map.rs.html#4...
Note that the worst case is that we ate a single unneeded growth, while the best case is that we avoided N - 1 growths, where N may be quite large.
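If you do the merge by hand instead, reserving up front gives the same bound of at most one growth (a sketch, with h0/h1 as in the thread):

    use std::collections::HashMap;

    // Reserve once for all incoming elements, then insert: at most one
    // growth happens here, rather than one per doubling along the way.
    fn merge(h0: &mut HashMap<String, i32>, h1: HashMap<String, i32>) {
        h0.reserve(h1.len());
        for (k, v) in h1 {
            h0.insert(k, v);
        }
    }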
Who's to say I'm not the one making the hashtable? There are plenty of real-world reasons the standard library hashtable may be either inaccessible or unsuitable.
Furthermore, the idea that "oh, honey, it's too hard, smart people did it for you" is insufferable and needs to die. When I'm the one making something, I have dramatically more information about the problem I'm trying to solve than the author of a hashtable library, and am therefore much better equipped to make design tradeoffs.
Please stop perpetuating the idea that 'just use a library' is universally the best option. Sometimes it's not.
And indeed, if you have your own custom operation in mind, it may well make sense for you to implement it on both your own types and the stdlib types.