Our algorithm is close to that:
1. We don't partition the data into fixed-size blocks; instead we index directly into the array
2. The site calculates a salted hash and sends us just the hash. We recommend a salt of at least 32 bytes from a CSPRNG
3. We HMAC the hash with a 64-byte site-specific token (AppID) to produce the seed
4. We generate 64 uniformly distributed locations from the seed and perform 64 reads of 64 bytes each to form a 4096-byte buffer, which we HMAC with the AppID to produce a second salt
5. The site uses this second salt to HMAC their original hash, and stores the result
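The steps above can be sketched roughly as follows. This is a minimal illustration, not the production implementation: the pool contents, pool size, credential format, and the hash-chain method for deriving the 64 read locations from the seed are all assumptions made for the sake of a runnable example.

```python
import hashlib
import hmac
import os
import secrets

POOL_SIZE = 1 << 20            # hypothetical data-pool size, for illustration
pool = os.urandom(POOL_SIZE)   # stand-in for the shared data pool
app_id = secrets.token_bytes(64)  # 64-byte site-specific token (AppID)

# --- Site side: salted hash of the credential (step 2) ---
salt = secrets.token_bytes(32)    # at least 32 bytes from a CSPRNG
credential = b"user@example.com:hunter2"   # example credential
salted_hash = hashlib.sha256(salt + credential).digest()

# --- Service side: HMAC the hash with the AppID to get the seed (step 3) ---
seed = hmac.new(app_id, salted_hash, hashlib.sha256).digest()

# --- Service side: 64 reads of 64 bytes -> 4096-byte buffer (step 4) ---
# Location derivation via a hash chain is an assumption; the real scheme
# may expand the seed differently.
buffer = b""
counter = seed
for _ in range(64):
    counter = hashlib.sha256(counter).digest()
    loc = int.from_bytes(counter[:8], "big") % (POOL_SIZE - 64)
    buffer += pool[loc:loc + 64]

second_salt = hmac.new(app_id, buffer, hashlib.sha256).digest()

# --- Site side: HMAC the original hash with the second salt (step 5) ---
stored = hmac.new(second_salt, salted_hash, hashlib.sha256).digest()
```

Note that the service only ever handles `salted_hash`, never the credential itself, and the final `stored` value depends on the pool contents through `second_salt`.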
This design allows multiple sites to securely share a single data pool. It also means that our service a) does not see usernames or passwords, b) does not know whether a login is valid or invalid, and c) cannot do anything to make an invalid login look valid to the site.
There are some additional details to handle upgrading hashes as the data pool grows, and also to provide virtual private data pools for each site (so I can give you a copy of your data pool if you ever want to self-host). This is all detailed in [1] above.