Instead of using integer IDs for the hexes, we created an encoded version of the ID with the property that removing a character gives you the cell's containing parent. This means we can do basic containment queries by using a low-resolution hex (a short string) as a prefix query. If a GPS track passes through that larger parent cell, the track will contain hexes with the same prefix. You don't get perfect control over distances because hexes have varying diameters (or rather varying approximations of one, since they aren't circles, they're hexes), but in practice, at scale, for a product that doesn't require high precision, it's very effective.
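The truncation trick can be sketched in a few lines. This is an illustrative toy with made-up string IDs, not the real H3 bit layout (H3 cell IDs are 64-bit integers), but it shows why a containment query becomes a prefix query:

```python
# Toy scheme: cell IDs are strings where each extra character adds one
# resolution level, so truncating an ID yields the parent cell's ID.
def parent(cell_id: str, res: int) -> str:
    """Parent of a cell at a coarser resolution = a prefix of its ID."""
    return cell_id[:res]

def track_intersects(track_cells, query_cell):
    """Prefix query: does any hex on the track fall inside query_cell?"""
    return any(c.startswith(query_cell) for c in track_cells)

track = ["8a2a3", "8a2a4", "8a2b1"]     # hypothetical encoded cell IDs
print(track_intersects(track, "8a2a"))  # True: track passes through parent 8a2a
print(track_intersects(track, "8a3"))   # False: track never enters 8a3
```

In a real store, the `startswith` check becomes whatever prefix/range query the database supports.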
I think by the end of this year we'll have about 6 TB of these hex sets in a four-node, eight-process ES cluster. Performance is pretty good. It also acts as our full-text search: half the time we want a geo search, we also want keyword search / filtering / etc. on the metadata of these trips.
Pretty fun system to build, and the concept works with a wide variety of data stores. Felt like a total hack job but it has stood the test of time.
Thanks uber, h3 is a great library!
Before that existed (pre-1.0, actually), I did something similar with geohashes, which are similar to H3 but based on simple string-encoded quadtrees. I indexed all the street segments in OpenStreetMap with that (~800 million at the time) and implemented a simple reverse geocoder. It worked shockingly well.
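For reference, a geohash encoder fits in a few lines: it's a binary search over latitude and longitude, emitting one bit per step (longitude on even bits) and packing five bits per base32 character. That's exactly the string-encoded quadtree described above, a sketch:

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat: float, lon: float, precision: int = 11) -> str:
    """Encode (lat, lon) as a base32 geohash string."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    bits = []
    for i in range(precision * 5):
        if i % 2 == 0:  # even bits refine longitude, odd bits latitude
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                bits.append(1)
                lon_lo = mid
            else:
                bits.append(0)
                lon_hi = mid
        else:
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                bits.append(1)
                lat_lo = mid
            else:
                bits.append(0)
                lat_hi = mid
    # pack 5 bits per base32 character
    return "".join(
        BASE32[int("".join(map(str, bits[i:i + 5])), 2)]
        for i in range(0, len(bits), 5)
    )

# Truncating the hash gives the containing cell, so prefix queries work
# the same way as with the H3-string scheme described upthread.
print(geohash(57.64911, 10.40744))     # a point in northern Denmark
print(geohash(57.64911, 10.40744, 5))  # same point, coarser cell
```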
The geo_shape type uses a BKD tree in a binary format. It's heavily optimized for these kinds of intersects/overlaps queries at scale. It basically does the same thing but uses a lot less disk space and memory, similar to what you would find in a proper GIS database. Elasticsearch/OpenSearch also support H3 and geohash grid aggregations on top of geo_shape or geo_point types.
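For anyone curious what that looks like in practice, a geo_shape intersects query is just a filter in the query DSL. A minimal sketch as a Python dict (the field name `route` and the bounding box are made up; the structure follows the Elasticsearch docs, where an envelope is given as upper-left then lower-right corners in [lon, lat] order):

```python
# Hypothetical query: find documents whose "route" shape intersects a bbox.
query = {
    "query": {
        "geo_shape": {
            "route": {
                "shape": {
                    "type": "envelope",
                    # ES envelope corners: [upper-left, lower-right], [lon, lat]
                    "coordinates": [[-74.1, 40.8], [-73.9, 40.6]],
                },
                "relation": "intersects",
            }
        }
    }
}
print(query["query"]["geo_shape"]["route"]["relation"])
```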
I'm guessing the author is using something like PostgreSQL, which of course has similar geospatial indexing support via PostGIS.
* I'd frame it as "kludge", reserving "hack" for the positive HN sense. :)
If joins are a critical, performance-sensitive operation, the most important property of a DGGS is congruency. H3 is not congruent; it was optimized for visualization, where congruency doesn't matter, rather than for analytical computation. For example, the article talks about deduplication, which isn't even necessary with a congruent DGGS. You can do joins with H3, but as a general rule it's not recommended unless the data is small enough that you can afford to brute-force it to some extent.
H3 is great for doing point geometry aggregates. It shines at that. Not so much geospatial joins though. DGGS optimized for analytic computation (and joins by implication) exist, they just aren’t optimal for trivial visualization.
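To make the deduplication point concrete, here's a toy with string-prefix cells (exact nesting, which real H3 children only approximate): if a mixed-resolution cover contains both a cell and its parent, a prefix join matches the same point twice and the results need deduplication.

```python
# Toy cells are strings with exact prefix nesting; real H3 children only
# approximately tile their parent, which is the congruency issue above.
cover = {"8a2a", "8a2a3"}                   # a parent and one of its children
points = {"p1": "8a2a3f", "p2": "8a2b11"}   # point id -> fine-resolution cell

matches = [(pid, c) for pid, cell in points.items()
           for c in cover if cell.startswith(c)]
print(sorted(matches))   # p1 matched twice, once per overlapping cover cell
unique = {pid for pid, _ in matches}
print(unique)            # {'p1'} after deduplication
```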
That said, this feels like an issue with rendering geometry rather than with the index itself. I’m curious to hear more about why you think the lack of congruency affects H3’s performance for spatial joins. Under the hood, it’s still a parent–child hierarchy very similar to S2’s — H3 children are topological rather than geometric children (even though they still mostly overlap).
Congruency allows for much more efficient join schedules and maximizes selectivity. This minimizes data motion, which is particularly important as data becomes large. Congruent shards also tend to be more computationally efficient generally, which does add up.
The other important aspect not raised here is that congruent DGGSes have much more scalable performance when you use them to build online indexes during ingestion. This follows from them being much more concurrency-friendly.
Not familiar with geo stuff / DGGS. Is H3 not congruent because hexagons, unlike squares or triangles, do not tile the plane perfectly?
I mean: could a system using hexagons ever be congruent?
Notably A5, which has the property that each cell covers exactly the same area, even when stretched towards the north and south poles. That's useful for certain kinds of spatial analysis where you need every cell to have the same size.
They may well be using some data store where spatial indexing is not possible or standard. GeoParquet is a common one now: a great format in many ways, but spatial indexing isn't there.
Postgres may be out of fashion, but an old-fashioned PostGIS server is still the simplest solution sometimes.
- Generate z-values for spatial objects. Points -> a single z-value at the highest resolution of the space. Non-points -> multiple z-values. Each z-value is represented by a single integer (I use 64-bit z-values, which provide 56 bits of spatial resolution). Each integer represents a 1-d range, e.g. 0x123 would represent 0x123000 through 0x123fff.
- A spatial join is basically a merge of these z-values. If you are joining one spatial object with a collection of N spatial objects, the time is log N. If you are joining two collections, it's more of a linear-time merge.
For more information: PROBE Spatial Data Modeling and Query Processing in an Image Database Application. IEEE Trans. Software Eng. 14(5): 611-629 (1988)
An open source java implementation: https://github.com/geophile/geophile. (The documentation includes a number of corrections to the published algorithm.)
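A minimal sketch of the z-value machinery (toy bit widths, not the 64-bit layout described above): interleave x and y bits to get a z-value, and treat a short z-value as a prefix denoting a contiguous 1-d range.

```python
def interleave(x: int, y: int, bits: int = 12) -> int:
    """Morton / z-value: interleave the bits of x and y so that
    nearby 2-d points tend to get nearby 1-d values."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)      # x bits go to even positions
        z |= ((y >> i) & 1) << (2 * i + 1)  # y bits go to odd positions
    return z

def z_range(prefix: int, prefix_bits: int, total_bits: int):
    """A short z-value is a prefix denoting a contiguous 1-d range,
    matching the 0x123 -> 0x123000..0x123fff example above."""
    shift = total_bits - prefix_bits
    lo = prefix << shift
    hi = lo | ((1 << shift) - 1)
    return lo, hi

lo, hi = z_range(0x123, 12, 24)
print(hex(lo), hex(hi))  # 0x123000 0x123fff
```

The join then reduces to comparing sorted lists of such ranges, which is where the merge-like behavior comes from.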
No, this city isn’t 4 miles from my city. There is a literal lake between us. It’s 10+ miles.
Please, invent something, precompute things, but just avoid naive-ish searches.
At 500 stations:
- H3: 218µs, 4.7KB, 109 allocs
- Fallback: 166µs, 1KB, 37 allocs
- Fallback is 31% faster
At 1000 stations:
- H3: 352µs, 4.7KB, 109 allocs
- Fallback: 312µs, 1KB, 37 allocs
- Fallback is 13% faster
At 2000 stations:
- H3: 664µs, 4.7KB, 109 allocs
- Fallback: 613µs, 1KB, 37 allocs
- Fallback is 8% faster
At 4500 stations (real-world scale):
- H3: 1.40ms, 4.7KB, 109 allocs
- Fallback: 1.34ms, 1KB, 37 allocs
- Fallback is 4% faster
Conclusion: the gap narrows as station count increases. At 4500 stations they're nearly equivalent. H3 has fixed overhead (~4.7KB / 109 allocs for the k=2 ring), while the fallback scales linearly. The crossover point where H3 wins is likely around 10-20K entries.

Updating an R-tree is log(n), just like any other index.
I wonder how it compares with geohashing. I know it's not as efficient in terms of partitioning, and queries end up weird since you need to handle decoding neighbor cells, but finding all elements of a cell is a "starts with" query, which lets you store the data effectively in most NoSQL databases that have some sort of text sorting.
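Concretely, the "starts with" query is just a range scan over lexicographically sorted keys, which is why it maps onto almost any ordered key-value store. A sketch with an in-memory sorted list standing in for the database (the keys are made-up geohash-like strings):

```python
import bisect

# Any store with lexicographic key ordering can answer a geohash
# "starts with" query as a range scan over [prefix, prefix + <max char>).
keys = sorted(["u4pru1", "u4pru9", "u4prv2", "u4ps00", "ezs42x"])

def prefix_scan(sorted_keys, prefix):
    lo = bisect.bisect_left(sorted_keys, prefix)
    hi = bisect.bisect_left(sorted_keys, prefix + "\uffff")
    return sorted_keys[lo:hi]

print(prefix_scan(keys, "u4pr"))  # ['u4pru1', 'u4pru9', 'u4prv2']
```

The neighbor-cell awkwardness is real, though: a radius query near a cell boundary still needs scans over several adjacent prefixes.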