They're the same in theory. The factor is the amount the original length is multiplied by (in place of *2 or *1.5 constants), and obviously x/√x = √x.
length * (1 + 1/√(length)) = length + length/√(length) = length + √(length)
Addition is cheaper than multiplication, and since we're approximating the sqrt we probably want to avoid calculating its reciprocal and multiplying. Our actual implementation is an approximation of 1 + 1/√(length).
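To make that concrete, here's a minimal sketch of the growth rule (the `grow` name is mine, and I'm assuming length ≥ 1 so the integer sqrt is never 0):

```python
from math import isqrt

def grow(length):
    # length + isqrt(length) approximates length * (1 + 1/sqrt(length))
    # using one integer sqrt and one addition: no reciprocal, no multiply.
    return length + isqrt(length)
```

E.g. `grow(100)` gives 110, i.e. a 1.1x growth factor at that size, and the factor keeps shrinking toward 1 as the array grows.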
Maybe I could've worded it a bit better.
More detail is in the RAOTS paper. It organizes the data blocks into so-called logical "superblocks" (which have no material representation): each superblock is ~n data blocks, each of length n. In practice it's not exactly this, because when we have an odd number of bits, say 5, we have to split it either 2:3 or 3:2 (for the log2 of num_blocks:block_size). The paper chooses the former (same as above). We could potentially do it the other way, with more blocks of smaller average size, but it would be a bit more awkward to index into the data blocks themselves, since you'd have to test for odd/even (which we avoid above because of the /2 integer division), and it would make the index block larger, which we'd want to avoid because we have to reallocate it.
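A hypothetical sketch of the index arithmetic under that 2:3 split (the `locate` name and return convention are mine, not the paper's pseudocode): superblock k holds 2^⌊k/2⌋ blocks of 2^⌈k/2⌉ elements each, so an element index decomposes via bit fields of i+1.

```python
def locate(i):
    # Map element index i to (data block number, offset within block),
    # assuming superblock k has 2**(k//2) blocks of size 2**(-(-k//2)).
    r = i + 1
    k = r.bit_length() - 1            # superblock index
    e_bits = k - k // 2               # log2(block size) = ceil(k/2)
    low = r - (1 << k)                # bits of r below the leading 1
    b = low >> e_bits                 # block within superblock k
    e = low & ((1 << e_bits) - 1)     # element within that block
    p = sum(1 << (j // 2) for j in range(k))  # blocks before superblock k
    return p + b, e
```

The odd/even test the quote mentions is hidden in the `k // 2` vs `k - k // 2` split; with the 3:2 split you'd need it explicitly.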
So in the approach taken, avg_block_size > √(length) and num_blocks < √(length), but avg_block_size * num_blocks == length still holds, and avg_block_size ≈ √(length) ≈ num_blocks.
If we're reallocating rather than using an index block, replace `num_blocks` with the number of times we have to call the allocator.
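For the reallocating variant, a quick simulation (names mine) shows the allocator-call count scales like √(length), roughly 2√(length) in the limit:

```python
from math import isqrt

def allocator_calls(target):
    # Count grow steps (i.e. allocator calls) needed to reach `target`
    # capacity, starting from 1, under the length + isqrt(length) rule.
    length, calls = 1, 0
    while length < target:
        length += isqrt(length)
        calls += 1
    return calls
```

Reaching a capacity of 10000 takes on the order of 2·√10000 = 200 calls, versus ~14 doublings; the trade is more allocator traffic for less over-allocation.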